diff --git a/.github/workflows/python-app.yaml b/.github/workflows/python-app.yaml
index 9002ce54c..8e445974c 100644
--- a/.github/workflows/python-app.yaml
+++ b/.github/workflows/python-app.yaml
@@ -5,7 +5,7 @@ on:
branches: [main] # Only main branch
tags: ['v*.*.*']
pull_request:
- branches: [main, dev]
+ branches: [main, 'dev*', 'dev/**', 'feature/**']
types: [opened, synchronize, reopened]
paths-ignore:
- 'docs/**'
@@ -88,7 +88,7 @@ jobs:
uv pip install --system .[dev]
- name: Run tests
- run: pytest -v -p no:warnings --numprocesses=auto
+ run: pytest -v --numprocesses=auto
test-examples:
runs-on: ubuntu-24.04
diff --git a/CHANGELOG.md b/CHANGELOG.md
index df33609d8..b5de41f19 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -41,6 +41,7 @@ Please keep the format of the changelog consistent with the other releases, so t
---
+
## [Unreleased] - ????-??-??
### ✨ Added
@@ -62,12 +63,149 @@ Please keep the format of the changelog consistent with the other releases, so t
### 📝 Docs
### 👷 Development
-- Enable blank issues
### 🚧 Known Issues
+---
Until here -->
+## [3.0.0] - 2025-10-13
+**Summary**: This release introduces new model dimensions (periods and scenarios) for multi-period investments and stochastic modeling, along with a redesigned effect sharing system and enhanced I/O capabilities.
+
+### ✨ Added
+
+**New model dimensions:**
+
+- **Period dimension**: Enables multi-period investment modeling with distinct decisions in each period for transformation pathway optimization
+- **Scenario dimension**: Supports stochastic modeling with weighted scenarios for robust decision-making under uncertainty (demand, prices, weather)
+ - Control variable independence across scenarios via `scenario_independent_sizes` and `scenario_independent_flow_rates` parameters
+ - By default, investment sizes are shared across scenarios while flow rates vary per scenario
+
+**Redesigned effect sharing system:**
+
+Effects now use intuitive `share_from_*` syntax that clearly shows contribution sources:
+
+```python
+costs = fx.Effect('costs', '€', 'Total costs',
+ share_from_temporal={'CO2': 0.2}, # From temporal effects
+ share_from_periodic={'land': 100}) # From periodic effects
+```
+
+This replaces `specific_share_to_other_effects_*` parameters and inverts the direction for clearer relationships.
+
+**Enhanced I/O and data handling:**
+
+- NetCDF/JSON serialization for all Interface objects and FlowSystem with round-trip support
+- FlowSystem manipulation: `sel()`, `isel()`, `resample()`, `copy()`, `__eq__()` methods (see the sketch below)
+- Direct access to the FlowSystem from results without manually restoring it (lazily loaded)
+- New `FlowResults` class and precomputed DataArrays for sizes/flow_rates/flow_hours
+- `effects_per_component` dataset for component impact evaluation, including all indirect effects through effect shares
+
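+A minimal sketch of the new manipulation methods (method names are taken from the list above; the argument forms are assumptions):
+
+```python
+import pandas as pd
+import flixopt as fx
+
+flow_system = fx.FlowSystem(timesteps=pd.date_range('2020-01-01', periods=48, freq='h'))
+window = flow_system.sel(time=flow_system.timesteps[:24])  # select a time window (argument form assumed)
+coarse = flow_system.resample(time='4h')                   # coarser resolution (signature assumed)
+independent_copy = flow_system.copy()                      # fully independent copy
+```
+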
+**Other additions:**
+
+- Balanced storage - charging and discharging sizes can be forced equal via the `balanced` parameter
+- New Storage parameters: `relative_minimum_final_charge_state` and `relative_maximum_final_charge_state` for final state control
+- Improved filter methods in results
+- Example for 2-stage investment decisions leveraging FlowSystem resampling
+
+### 💥 Breaking Changes
+
+- `relative_minimum_charge_state` and `relative_maximum_charge_state` no longer include an extra timestep.
+- Renamed class `SystemModel` to `FlowSystemModel`
+- Renamed class `Model` to `Submodel`
+- Renamed `mode` parameter in plotting methods to `style`
+- Renamed investment binary variable `is_invested` to `invested` in `InvestmentModel`
+- `Calculation.do_modeling()` now returns the `Calculation` object instead of its `linopy.Model`. Callers that previously accessed the linopy model directly should now use `calculation.do_modeling().model` instead of `calculation.do_modeling()`.
+
+### ♻️ Changed
+
+- FlowSystems cannot be shared across multiple Calculations anymore. A copy of the FlowSystem is created instead, making every Calculation independent
+- Each Subcalculation in `SegmentedCalculation` now has its own distinct `FlowSystem` object
+- Type system overhaul - added a clear separation between temporal and non-temporal data throughout the codebase
+- Enhanced FlowSystem interface with improved `__repr__()` and `__str__()` methods
+- Improved model structure - views and organisation are now divided into:
+ - Model: The main Model (linopy.Model) that is used to create and store the variables and constraints for the FlowSystem.
+ - Submodel: The base class for all submodels. Each is a subset of the Model, for simpler access and clearer code.
+- Made docstrings in `config.py` more compact and easier to read
+- Improved format handling in configuration module
+- Enhanced console output to support both `stdout` and `stderr` stream selection
+- Added `show_logger_name` parameter to `CONFIG.Logging` for displaying logger names in messages
+
+### 🗑️ Deprecated
+
+- The `agg_group` and `agg_weight` parameters of `TimeSeriesData` are deprecated and will be removed in a future version. Use `aggregation_group` and `aggregation_weight` instead.
+- The `active_timesteps` parameter of `Calculation` is deprecated and will be removed in a future version. Use the new `sel(time=...)` method on the FlowSystem instead.
+- The assignment of Bus objects to `Flow.bus` is deprecated and will be removed in a future version. Use the label of the Bus instead.
+- The usage of Effect objects in dicts to assign shares to Effects is deprecated and will be removed in a future version. Use the label of the Effect instead.
+- **InvestParameters** parameters renamed for improved clarity around investment and retirement effects:
+ - `fix_effects` → `effects_of_investment`
+ - `specific_effects` → `effects_of_investment_per_size`
+ - `divest_effects` → `effects_of_retirement`
+ - `piecewise_effects` → `piecewise_effects_of_investment`
+- **Effect** parameters renamed:
+ - `minimum_investment` → `minimum_periodic`
+ - `maximum_investment` → `maximum_periodic`
+ - `minimum_operation` → `minimum_temporal`
+ - `maximum_operation` → `maximum_temporal`
+ - `minimum_operation_per_hour` → `minimum_per_hour`
+ - `maximum_operation_per_hour` → `maximum_per_hour`
+- **Component** parameters renamed:
+ - `Source.source` → `Source.outputs`
+ - `Sink.sink` → `Sink.inputs`
+ - `SourceAndSink.source` → `SourceAndSink.outputs`
+ - `SourceAndSink.sink` → `SourceAndSink.inputs`
+ - `SourceAndSink.prevent_simultaneous_sink_and_source` → `SourceAndSink.prevent_simultaneous_flow_rates`
+
+### 🔥 Removed
+
+- **Effect share parameters**: The old `specific_share_to_other_effects_*` parameters were replaced WITHOUT DEPRECATION
+ - `specific_share_to_other_effects_operation` → `share_from_temporal` (with inverted direction)
+ - `specific_share_to_other_effects_invest` → `share_from_periodic` (with inverted direction)
+
+### 🐛 Fixed
+
+- Enhanced NetCDF I/O with proper attribute preservation for DataArrays
+- Improved error handling and validation in serialization processes
+- Better type consistency across all framework components
+- Added extra validation in `config.py` to improve error handling
+
+### 📝 Docs
+
+- Reorganized mathematical notation docs: moved to lowercase `mathematical-notation/` with subdirectories (`elements/`, `features/`, `modeling-patterns/`)
+- Added comprehensive documentation pages: `dimensions.md` (time/period/scenario), `effects-penalty-objective.md`, modeling patterns
+- Enhanced all element pages with implementation details, cross-references, and "See Also" sections
+- Rewrote README and landing page with clearer vision, roadmap, and universal applicability emphasis
+- Removed deprecated `docs/SUMMARY.md`, updated `mkdocs.yml` for new structure
+- Tightened docstrings in core modules with better cross-referencing
+- Added recipes section to docs
+
+### 🚧 Known Issues
+
+- IO for single Interfaces/Elements to Datasets might not work properly if the Interface/Element is not part of a fully transformed and connected FlowSystem. This arises from numeric data not being stored as `xr.DataArray` by the user. To avoid this, always call `to_dataset()` on Elements inside a FlowSystem that is connected and transformed.
+
+### 👷 Development
+
+- **Centralized deprecation pattern**: Added `_handle_deprecated_kwarg()` helper method to the `Interface` base class that provides reusable deprecation handling with consistent warnings, conflict detection, and optional value transformation. Applied across 5 classes (InvestParameters, Source, Sink, SourceAndSink, Effect), reducing deprecation boilerplate by 72% (a rough sketch follows after this list).
+- FlowSystem data management simplified - removed `time_series_collection` pattern in favor of direct timestep properties
+- Change modeling hierarchy to allow for more flexibility in future development. This leads to minimal changes in the access and creation of Submodels and their variables.
+- Added new module `.modeling` that contains modeling primitives and utilities
+- Clearer separation between the main Model and "Submodels"
+- Improved access to the Submodels and their variables, constraints and submodels
+- Added `__repr__()` for Submodels to easily inspect their contents
+- Enhanced data handling methods
+ - `fit_to_model_coords()` method for data alignment
+ - `fit_effects_to_model_coords()` method for effect data processing
+ - `connect_and_transform()` method replacing several operations
+- **Testing improvements**: Eliminated warnings during test execution
+ - Updated deprecated code patterns in tests and examples (e.g., `sink`/`source` → `inputs`/`outputs`, `'H'` → `'h'` frequency)
+ - Refactored plotting logic to handle test environments explicitly with non-interactive backends
+ - Added comprehensive warning filters in `__init__.py` and `pyproject.toml` to suppress third-party library warnings
+ - Improved test fixtures with proper figure cleanup to prevent memory leaks
+ - Enhanced backend detection and handling in `plotting.py` for both Matplotlib and Plotly
+ - Always run dependent tests in order
+
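+A rough, free-standing sketch of the behavior described above (the real helper is a method on `Interface` and may differ in signature and details):
+
+```python
+import warnings
+
+def _handle_deprecated_kwarg(kwargs: dict, old: str, new: str, transform=None) -> dict:
+    """Illustrative sketch: map a deprecated kwarg onto its replacement."""
+    if old in kwargs:
+        if new in kwargs:  # conflict detection
+            raise ValueError(f'Use either {old!r} or {new!r}, not both')
+        warnings.warn(f'{old!r} is deprecated, use {new!r} instead', DeprecationWarning, stacklevel=3)
+        value = kwargs.pop(old)
+        kwargs[new] = transform(value) if transform is not None else value  # optional value transformation
+    return kwargs
+```
+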
+---
+
## [2.2.0] - 2025-10-11
**Summary:** This release focuses on configuration and logging management.
@@ -77,9 +215,16 @@ Until here -->
- Added configurable log format settings: `CONFIG.Logging.date_format` and `CONFIG.Logging.format`
- Added configurable console settings: `CONFIG.Logging.console_width` and `CONFIG.Logging.show_path`
- Added `CONFIG.Logging.Colors` nested class for customizable log level colors using ANSI escape codes (works with both standard and Rich handlers)
+- All examples now enable console logging to demonstrate proper logging usage
+- Console logging now outputs to `sys.stdout` instead of `sys.stderr` for better compatibility with output redirection
+
+### 💥 Breaking Changes
+- Console logging is now disabled by default (`CONFIG.Logging.console = False`). Enable it explicitly in your scripts with `CONFIG.Logging.console = True` and `CONFIG.apply()` (see the sketch below)
+- File logging is now disabled by default (`CONFIG.Logging.file = None`). Set a file path to enable file logging
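+
+A minimal sketch of opting back in (assuming `CONFIG` is exposed on the `flixopt` package, as the examples suggest):
+
+```python
+import flixopt as fx
+
+fx.CONFIG.Logging.console = True        # re-enable console logging
+fx.CONFIG.Logging.file = 'flixopt.log'  # set a path to enable file logging
+fx.CONFIG.apply()                       # apply the new configuration
+```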
### ♻️ Changed
- Logging and Configuration management changed
+- Improved default logging colors: DEBUG is now gray (`\033[90m`) for de-emphasized messages, INFO uses terminal default color (`\033[0m`) for clean output
### 🗑️ Deprecated
- `change_logging_level()` function is now deprecated in favor of `CONFIG.Logging.level` and `CONFIG.apply()`. Will be removed in version 3.0.0.
diff --git a/MANIFEST.in b/MANIFEST.in
index 72a1ff8eb..383cbef76 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -5,7 +5,7 @@ include CHANGELOG.md
include pyproject.toml
# Include package source and data
-recursive-include flixopt *.py *.yaml
+recursive-include flixopt *.py
# Exclude everything else
global-exclude *.pyc *.pyo __pycache__
diff --git a/README.md b/README.md
index edb77e74d..957274f1c 100644
--- a/README.md
+++ b/README.md
@@ -8,84 +8,122 @@
---
-## 🚀 Purpose
+## 🎯 Vision
-**flixopt** is a Python-based optimization framework designed to tackle energy and material flow problems using mixed-integer linear programming (MILP).
+**FlixOpt aims to be the most accessible and flexible Python framework for energy and material flow optimization.**
-**flixopt** bridges the gap between high-level energy systems models like [FINE](https://github.com/FZJ-IEK3-VSA/FINE) used for design and (multi-period) investment decisions and low-level dispatch optimization tools used for operation decisions.
+We believe that optimization modeling should be **approachable for beginners** yet **powerful for experts**. Too often, frameworks force you to choose between ease of use and flexibility. FlixOpt refuses this compromise.
-**flixopt** leverages the fast and efficient [linopy](https://github.com/PyPSA/linopy/) for the mathematical modeling and [xarray](https://github.com/pydata/xarray) for data handling.
+### Where We're Going
-**flixopt** provides a user-friendly interface with options for advanced users.
+**Short-term goals:**
+- **Multi-dimensional modeling**: Multi-period investments and scenario-based stochastic optimization are available, with periods and scenarios under active development for enhanced features
+- **Enhanced component library**: More pre-built, domain-specific components (sector coupling, hydrogen systems, thermal networks, demand-side management)
-It was originally developed by [TU Dresden](https://github.com/gewv-tu-dresden) as part of the SMARTBIOGRID project, funded by the German Federal Ministry for Economic Affairs and Energy (FKZ: 03KB159B). Building on the Matlab-based flixOptMat framework (developed in the FAKS project), FlixOpt also incorporates concepts from [oemof/solph](https://github.com/oemof/oemof-solph).
+**Medium-term vision:**
+- **Modeling to generate alternatives (MGA)**: Built-in support for exploring near-optimal solution spaces to produce more robust, diverse solutions under uncertainty
+- **Interactive tutorials**: Browser-based, reactive tutorials for learning FlixOpt without local installation
+- **Standardized cost calculations**: Align with industry standards (VDI 2067) for CAPEX/OPEX calculations
+- **Advanced result analysis**: Time-series aggregation, automated reporting, and rich visualization options
----
+**Long-term vision:**
+- **Showcase universal applicability**: FlixOpt already handles any flow-based system (supply chains, water networks, production planning, chemical processes) - we need more examples and domain-specific component libraries to demonstrate this
+- **Seamless integration**: First-class support for coupling with simulation tools, databases, existing energy system models, and GIS data
+- **Robust optimization**: Built-in uncertainty quantification and stochastic programming capabilities
+- **Community ecosystem**: Rich library of user-contributed components, examples, and domain-specific extensions
+- **Model validation tools**: Automated checks for physical plausibility, data consistency, and common modeling errors
+
+### Why FlixOpt Exists
-## 🌟 Key Features
+FlixOpt is a **general-purpose framework for modeling any system involving flows and conversions** - energy, materials, fluids, goods, or data. While energy systems are our primary focus, the same mathematical foundation applies to supply chains, water networks, production lines, and more.
-- **High-level Interface** with low-level control
- - User-friendly interface for defining flow systems
- - Pre-defined components like CHP, Heat Pump, Cooling Tower, etc.
- - Fine-grained control for advanced configurations
+We bridge the gap between high-level strategic models (like [FINE](https://github.com/FZJ-IEK3-VSA/FINE)) for long-term planning and low-level dispatch tools for operations. FlixOpt is the **sweet spot** for:
-- **Investment Optimization**
- - Combined dispatch and investment optimization
- - Size optimization and discrete investment decisions
- - Combined with On/Off variables and constraints
+- **Researchers** who need to prototype quickly but may require deep customization later
+- **Engineers** who want reliable, tested components without black-box abstractions
+- **Students** learning optimization who benefit from clear, Pythonic interfaces
+- **Practitioners** who need to move from model to production-ready results
+- **Domain experts** from any field where things flow, transform, and need optimizing
-- **Effects, not only Costs --> Multi-criteria Optimization**
- - flixopt abstracts costs as so called 'Effects'. This allows to model costs, CO2-emissions, primary-energy-demand or area-demand at the same time.
- - Effects can interact with each other(e.g., specific CO2 costs)
- - Any of these `Effects` can be used as the optimization objective.
- - A **Weigted Sum** of Effects can be used as the optimization objective.
- - Every Effect can be constrained ($\epsilon$-constraint method).
+Built on modern foundations ([linopy](https://github.com/PyPSA/linopy/) and [xarray](https://github.com/pydata/xarray)), FlixOpt delivers both **performance** and **transparency**. You can inspect everything, extend anything, and trust that your model does exactly what you designed.
-- **Calculation Modes**
- - **Full** - Solve the model with highest accuracy and computational requirements.
- - **Segmented** - Speed up solving by using a rolling horizon.
- - **Aggregated** - Speed up solving by identifying typical periods using [TSAM](https://github.com/FZJ-IEK3-VSA/tsam). Suitable for large models.
+Originally developed at [TU Dresden](https://github.com/gewv-tu-dresden) for the SMARTBIOGRID project (funded by the German Federal Ministry for Economic Affairs and Energy, FKZ: 03KB159B), FlixOpt has evolved from the Matlab-based flixOptMat framework while incorporating the best ideas from [oemof/solph](https://github.com/oemof/oemof-solph).
---
-## 📦 Installation
+## 🌟 What Makes FlixOpt Different
-Install FlixOpt via pip.
-`pip install flixopt`
-With [HiGHS](https://github.com/ERGO-Code/HiGHS?tab=readme-ov-file) included out of the box, flixopt is ready to use..
+### Start Simple, Scale Complex
+Define a working model in minutes with high-level components, then drill down to fine-grained control when needed. No rewriting, no framework switching.
-We recommend installing FlixOpt with all dependencies, which enables additional features like interactive network visualizations ([pyvis](https://github.com/WestHealth/pyvis)) and time series aggregation ([tsam](https://github.com/FZJ-IEK3-VSA/tsam)).
-`pip install "flixopt[full]"`
+```python
+import flixopt as fx
----
+# Simple start
+boiler = fx.Boiler("Boiler", eta=0.9, ...)
+
+# Advanced control when needed - extend with native linopy
+boiler.model.add_constraints(custom_constraint, name="my_constraint")
+```
-## 📚 Documentation
+### Multi-Criteria Optimization Done Right
+Model costs, emissions, resource use, and any custom metric simultaneously as **Effects**. Optimize any single Effect, use weighted combinations, or apply ε-constraints:
-The documentation is available at [https://flixopt.github.io/flixopt/latest/](https://flixopt.github.io/flixopt/latest/)
+```python
+costs = fx.Effect('costs', '€', 'Total costs',
+                   share_from_temporal={'CO2': 0.18})  # 0.18 €/kg CO₂ (180 €/t)
+co2 = fx.Effect('CO2', 'kg', 'Emissions', maximum_periodic=50000)
+```
+
+### Performance at Any Scale
+Choose the right calculation mode for your problem:
+- **Full** - Maximum accuracy for smaller problems
+- **Segmented** - Rolling horizon for large time series
+- **Aggregated** - Typical periods using [TSAM](https://github.com/FZJ-IEK3-VSA/tsam) for massive models
+
+### Built for Reproducibility
+Every result file is self-contained with complete model information. Load it months later and know exactly what you optimized. Export to NetCDF, share with colleagues, archive for compliance.
---
-## 🎯️ Solver Integration
+## 🚀 Quick Start
+
+```bash
+pip install flixopt
+```
-By default, FlixOpt uses the open-source solver [HiGHS](https://highs.dev/) which is installed by default. However, it is compatible with additional solvers such as:
+That's it. FlixOpt comes with the [HiGHS](https://highs.dev/) solver included - you're ready to optimize.
+Many more solvers are supported (Gurobi, CPLEX, CBC, GLPK, ...).
-- [Gurobi](https://www.gurobi.com/)
-- [CBC](https://github.com/coin-or/Cbc)
-- [GLPK](https://www.gnu.org/software/glpk/)
-- [CPLEX](https://www.ibm.com/analytics/cplex-optimizer)
+For additional features (interactive network visualization, time series aggregation):
+```bash
+pip install "flixopt[full]"
+```
-For detailed licensing and installation instructions, refer to the respective solver documentation.
+**Next steps:**
+- 📚 [Full Documentation](https://flixopt.github.io/flixopt/latest/)
+- 💡 [Examples](https://flixopt.github.io/flixopt/latest/examples/)
+- 🔧 [API Reference](https://flixopt.github.io/flixopt/latest/api-reference/)
---
-## 🛠 Development Setup
-Look into our docs for [development setup](https://flixopt.github.io/flixopt/latest/contribute/)
+## 🤝 Contributing
+
+FlixOpt thrives on community input. Whether you're fixing bugs, adding components, improving docs, or sharing use cases - we welcome your contributions.
+
+See our [contribution guide](https://flixopt.github.io/flixopt/latest/contribute/) to get started.
---
## 📖 Citation
-If you use FlixOpt in your research or project, please cite the following:
+If FlixOpt supports your research or project, please cite:
- **Main Citation:** [DOI:10.18086/eurosun.2022.04.07](https://doi.org/10.18086/eurosun.2022.04.07)
- **Short Overview:** [DOI:10.13140/RG.2.2.14948.24969](https://doi.org/10.13140/RG.2.2.14948.24969)
+
+---
+
+## 📄 License
+
+MIT License - See [LICENSE](https://github.com/flixopt/flixopt/blob/main/LICENSE) for details.
diff --git a/docs/SUMMARY.md b/docs/SUMMARY.md
deleted file mode 100644
index f66c4e5e5..000000000
--- a/docs/SUMMARY.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- [Home](index.md)
-- [Getting Started](getting-started.md)
-- [User Guide](user-guide/)
-- [Examples](examples/)
-- [Contribute](contribute.md)
-- [API Reference](api-reference/)
-- [Release Notes](changelog/)
diff --git a/docs/images/flixopt-icon.svg b/docs/images/flixopt-icon.svg
index 04a6a6851..08fe340f9 100644
--- a/docs/images/flixopt-icon.svg
+++ b/docs/images/flixopt-icon.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
diff --git a/docs/index.md b/docs/index.md
index 04020639e..2c6420f7f 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,24 +1,85 @@
# FlixOpt
-**FlixOpt** is a Python-based optimization framework designed to tackle energy and material flow problems using mixed-integer linear programming (MILP).
+## 🎯 Vision
-It borrows concepts from both [FINE](https://github.com/FZJ-IEK3-VSA/FINE) and [oemof.solph](https://github.com/oemof/oemof-solph).
+**FlixOpt aims to be the most accessible and flexible Python framework for energy and material flow optimization.**
-## Why FlixOpt?
+We believe that optimization modeling should be **approachable for beginners** yet **powerful for experts**. Too often, frameworks force you to choose between ease of use and flexibility. FlixOpt refuses this compromise.
-FlixOpt is designed as a general-purpose optimization framework to get your model running quickly, without sacrificing flexibility down the road:
+### Where We're Going
-- **Easy to Use API**: FlixOpt provides a Pythonic, object-oriented interface that makes mathematical optimization more accessible to Python developers.
+**Short-term goals:**
-- **Approachable Learning Curve**: Designed to be accessible from the start, with options for more detailed models down the road.
+- **Multi-dimensional modeling**: Multi-period investments and scenario-based stochastic optimization are available, with periods and scenarios under active development for enhanced features
+- **Enhanced component library**: More pre-built, domain-specific components (sector coupling, hydrogen systems, thermal networks, demand-side management)
-- **Domain Independence**: While frameworks like oemof and FINE excel at energy system modeling with domain-specific components, FlixOpt offers a more general mathematical approach that can be applied across different fields.
+**Medium-term vision:**
-- **Extensibility**: Easily add custom constraints or variables to any FlixOpt Model using [linopy](https://github.com/PyPSA/linopy). Tailor any FlixOpt model to your specific needs without loosing the convenience of the framework.
+- **Modeling to generate alternatives (MGA)**: Built-in support for exploring near-optimal solution spaces to produce more robust, diverse solutions under uncertainty
+- **Interactive tutorials**: Browser-based, reactive tutorials for learning FlixOpt without local installation ([marimo](https://marimo.io))
+- **Standardized cost calculations**: Align with industry standards (VDI 2067) for CAPEX/OPEX calculations
+- **Advanced result analysis**: Time-series aggregation, automated reporting, and rich visualization options
+- **Recipe collection**: Community-driven library of common modeling patterns, data manipulation techniques, and optimization strategies (see [Recipes](user-guide/recipes/index.md) - help wanted!)
-- **Solver Agnostic**: Work with different solvers through a consistent interface.
+**Long-term vision:**
-- **Results File I/O**: Built to analyze results independent of running the optimization.
+- **Showcase universal applicability**: FlixOpt already handles any flow-based system (supply chains, water networks, production planning, chemical processes) - we need more examples and domain-specific component libraries to demonstrate this
+- **Seamless integration**: First-class support for coupling with simulation tools, databases, existing energy system models, and GIS data
+- **Robust optimization**: Built-in uncertainty quantification and stochastic programming capabilities
+- **Community ecosystem**: Rich library of user-contributed components, examples, and domain-specific extensions
+- **Model validation tools**: Automated checks for physical plausibility, data consistency, and common modeling errors
+
+### Why FlixOpt Exists
+
+FlixOpt is a **general-purpose framework for modeling any system involving flows and conversions** - energy, materials, fluids, goods, or data. While energy systems are our primary focus, the same mathematical foundation applies to supply chains, water networks, production lines, and more.
+
+We bridge the gap between high-level strategic models (like [FINE](https://github.com/FZJ-IEK3-VSA/FINE)) for long-term planning and low-level dispatch tools for operations. FlixOpt is the **sweet spot** for:
+
+- **Researchers** who need to prototype quickly but may require deep customization later
+- **Engineers** who want reliable, tested components without black-box abstractions
+- **Students** learning optimization who benefit from clear, Pythonic interfaces
+- **Practitioners** who need to move from model to production-ready results
+- **Domain experts** from any field where things flow, transform, and need optimizing
+
+Built on modern foundations ([linopy](https://github.com/PyPSA/linopy/) and [xarray](https://github.com/pydata/xarray)), FlixOpt delivers both **performance** and **transparency**. You can inspect everything, extend anything, and trust that your model does exactly what you designed.
+
+Originally developed at [TU Dresden](https://github.com/gewv-tu-dresden) for the SMARTBIOGRID project (funded by the German Federal Ministry for Economic Affairs and Energy, FKZ: 03KB159B), FlixOpt has evolved from the Matlab-based flixOptMat framework while incorporating the best ideas from [oemof/solph](https://github.com/oemof/oemof-solph).
+
+---
+
+## What Makes FlixOpt Different
+
+### Start Simple, Scale Complex
+Define a working model in minutes with high-level components, then drill down to fine-grained control when needed. No rewriting, no framework switching.
+
+```python
+import flixopt as fx
+
+# Simple start
+boiler = fx.Boiler("Boiler", eta=0.9, ...)
+
+# Advanced control when needed - extend with native linopy
+boiler.model.add_constraints(custom_constraint, name="my_constraint")
+```
+
+### Multi-Criteria Optimization Done Right
+Model costs, emissions, resource use, and any custom metric simultaneously as **Effects**. Optimize any single Effect, use weighted combinations, or apply ε-constraints:
+
+```python
+costs = fx.Effect('costs', '€', 'Total costs',
+                   share_from_temporal={'CO2': 0.18})  # 0.18 €/kg CO₂ (180 €/t)
+co2 = fx.Effect('CO2', 'kg', 'Emissions', maximum_periodic=50000)
+```
+
+### Performance at Any Scale
+Choose the right calculation mode for your problem:
+
+- **Full** - Maximum accuracy for smaller problems
+- **Segmented** - Rolling horizon for large time series
+- **Aggregated** - Typical periods using [TSAM](https://github.com/FZJ-IEK3-VSA/tsam) for massive models
+
+### Built for Reproducibility
+Every result file is self-contained with complete model information. Load it months later and know exactly what you optimized. Export to NetCDF, share with colleagues, archive for compliance.

diff --git a/docs/user-guide/Mathematical Notation/Effects, Penalty & Objective.md b/docs/user-guide/Mathematical Notation/Effects, Penalty & Objective.md
deleted file mode 100644
index 7da311c37..000000000
--- a/docs/user-guide/Mathematical Notation/Effects, Penalty & Objective.md
+++ /dev/null
@@ -1,132 +0,0 @@
-## Effects
-[`Effects`][flixopt.effects.Effect] are used to allocate things like costs, emissions, or other "effects" occurring in the system.
-These arise from so called **Shares**, which originate from **Elements** like [Flows](Flow.md).
-
-**Example:**
-
-[`Flows`][flixopt.elements.Flow] have an attribute called `effects_per_flow_hour`, defining the effect amount of per flow hour.
-Associated effects could be:
-- costs - given in [€/kWh]...
-- ...or emissions - given in [kg/kWh].
--
-Effects are allocated separately for investments and operation.
-
-### Shares to Effects
-
-$$ \label{eq:Share_invest}
-s_{l \rightarrow e, \text{inv}} = \sum_{v \in \mathcal{V}_{l, \text{inv}}} v \cdot \text a_{v \rightarrow e}
-$$
-
-$$ \label{eq:Share_operation}
-s_{l \rightarrow e, \text{op}}(\text{t}_i) = \sum_{v \in \mathcal{V}_{l,\text{op}}} v(\text{t}_i) \cdot \text a_{v \rightarrow e}(\text{t}_i)
-$$
-
-With:
-
-- $\text{t}_i$ being the time step
-- $\mathcal{V_l}$ being the set of all optimization variables of element $e$
-- $\mathcal{V}_{l, \text{inv}}$ being the set of all optimization variables of element $e$ related to investment
-- $\mathcal{V}_{l, \text{op}}$ being the set of all optimization variables of element $e$ related to operation
-- $v$ being an optimization variable of the element $l$
-- $v(\text{t}_i)$ being an optimization variable of the element $l$ at timestep $\text{t}_i$
-- $\text a_{v \rightarrow e}$ being the factor between the optimization variable $v$ to effect $e$
-- $\text a_{v \rightarrow e}(\text{t}_i)$ being the factor between the optimization variable $v$ to effect $e$ for timestep $\text{t}_i$
-- $s_{l \rightarrow e, \text{inv}}$ being the share of element $l$ to the investment part of effect $e$
-- $s_{l \rightarrow e, \text{op}}(\text{t}_i)$ being the share of element $l$ to the operation part of effect $e$
-
-### Shares between different Effects
-
-Furthermore, the Effect $x$ can contribute a share to another Effect ${e} \in \mathcal{E}\backslash x$.
-This share is defined by the factor $\text r_{x \rightarrow e}$.
-
-For example, the Effect "CO$_2$ emissions" (unit: kg)
-can cause an additional share to Effect "monetary costs" (unit: €).
-In this case, the factor $\text a_{x \rightarrow e}$ is the specific CO$_2$ price in €/kg. However, circular references have to be avoided.
-
-The overall sum of investment shares of an Effect $e$ is given by $\eqref{eq:Effect_invest}$
-
-$$ \label{eq:Effect_invest}
-E_{e, \text{inv}} =
-\sum_{l \in \mathcal{L}} s_{l \rightarrow e,\text{inv}} +
-\sum_{x \in \mathcal{E}\backslash e} E_{x, \text{inv}} \cdot \text{r}_{x \rightarrow e,\text{inv}}
-$$
-
-The overall sum of operation shares is given by $\eqref{eq:Effect_Operation}$
-
-$$ \label{eq:Effect_Operation}
-E_{e, \text{op}}(\text{t}_{i}) =
-\sum_{l \in \mathcal{L}} s_{l \rightarrow e, \text{op}}(\text{t}_i) +
-\sum_{x \in \mathcal{E}\backslash e} E_{x, \text{op}}(\text{t}_i) \cdot \text{r}_{x \rightarrow {e},\text{op}}(\text{t}_i)
-$$
-
-and totals to $\eqref{eq:Effect_Operation_total}$
-$$\label{eq:Effect_Operation_total}
-E_{e,\text{op},\text{tot}} = \sum_{i=1}^n E_{e,\text{op}}(\text{t}_{i})
-$$
-
-With:
-
-- $\mathcal{L}$ being the set of all elements in the FlowSystem
-- $\mathcal{E}$ being the set of all effects in the FlowSystem
-- $\text r_{x \rightarrow e, \text{inv}}$ being the factor between the invest part of Effect $x$ and Effect $e$
-- $\text r_{x \rightarrow e, \text{op}}(\text{t}_i)$ being the factor between the operation part of Effect $x$ and Effect $e$
-
-- $\text{t}_i$ being the time step
-- $s_{l \rightarrow e, \text{inv}}$ being the share of element $l$ to the investment part of effect $e$
-- $s_{l \rightarrow e, \text{op}}(\text{t}_i)$ being the share of element $l$ to the operation part of effect $e$
-
-
-The total of an effect $E_{e}$ is given as $\eqref{eq:Effect_Total}$
-
-$$ \label{eq:Effect_Total}
-E_{e} = E_{\text{inv},e} +E_{\text{op},\text{tot},e}
-$$
-
-### Constraining Effects
-
-For each variable $v \in \{ E_{e,\text{inv}}, E_{e,\text{op},\text{tot}}, E_e\}$, a lower bound $v^\text{L}$ and upper bound $v^\text{U}$ can be defined as
-
-$$ \label{eq:Bounds_Single}
-\text v^\text{L} \leq v \leq \text v^\text{U}
-$$
-
-Furthermore, bounds for the operational shares can be set for each time step
-
-$$ \label{eq:Bounds_Time_Steps}
-\text E_{e,\text{op}}^\text{L}(\text{t}_i) \leq E_{e,\text{op}}(\text{t}_i) \leq \text E_{e,\text{op}}^\text{U}(\text{t}_i)
-$$
-
-## Penalty
-
-Additionally to the user defined [Effects](#effects), a Penalty $\Phi$ is part of every FlixOpt Model.
-Its used to prevent unsolvable problems and simplify troubleshooting.
-Shares to the penalty can originate from every Element and are constructed similarly to
-$\eqref{Share_invest}$ and $\eqref{Share_operation}$.
-
-$$ \label{eq:Penalty}
-\Phi = \sum_{l \in \mathcal{L}} \left( s_{l \rightarrow \Phi} +\sum_{\text{t}_i \in \mathcal{T}} s_{l \rightarrow \Phi}(\text{t}_{i}) \right)
-$$
-
-With:
-
-- $\mathcal{L}$ being the set of all elements in the FlowSystem
-- $\mathcal{T}$ being the set of all timesteps
-- $s_{l \rightarrow \Phi}$ being the share of element $l$ to the penalty
-
-At the moment, penalties only occur in [Buses](Bus.md)
-
-## Objective
-
-The optimization objective of a FlixOpt Model is defined as $\eqref{eq:Objective}$
-$$ \label{eq:Objective}
-\min(E_{\Omega} + \Phi)
-$$
-
-With:
-
-- $\Omega$ being the chosen **Objective [Effect](#effects)** (see $\eqref{eq:Effect_Total}$)
-- $\Phi$ being the [Penalty](#penalty)
-
-This approach allows for a multi-criteria optimization using both...
- - ... the **Weighted Sum** method, as the chosen **Objective Effect** can incorporate other Effects.
- - ... the ($\epsilon$-constraint method) by constraining effects.
diff --git a/docs/user-guide/Mathematical Notation/Flow.md b/docs/user-guide/Mathematical Notation/Flow.md
deleted file mode 100644
index 78135e822..000000000
--- a/docs/user-guide/Mathematical Notation/Flow.md
+++ /dev/null
@@ -1,26 +0,0 @@
-The flow_rate is the main optimization variable of the Flow. It's limited by the size of the Flow and relative bounds \eqref{eq:flow_rate}.
-
-$$ \label{eq:flow_rate}
- \text P \cdot \text p^{\text{L}}_{\text{rel}}(\text{t}_{i})
- \leq p(\text{t}_{i}) \leq
- \text P \cdot \text p^{\text{U}}_{\text{rel}}(\text{t}_{i})
-$$
-
-With:
-
-- $\text P$ being the size of the Flow
-- $p(\text{t}_{i})$ being the flow-rate at time $\text{t}_{i}$
-- $\text p^{\text{L}}_{\text{rel}}(\text{t}_{i})$ being the relative lower bound (typically 0)
-- $\text p^{\text{U}}_{\text{rel}}(\text{t}_{i})$ being the relative upper bound (typically 1)
-
-With $\text p^{\text{L}}_{\text{rel}}(\text{t}_{i}) = 0$ and $\text p^{\text{U}}_{\text{rel}}(\text{t}_{i}) = 1$,
-equation \eqref{eq:flow_rate} simplifies to
-
-$$
- 0 \leq p(\text{t}_{i}) \leq \text P
-$$
-
-
-This mathematical formulation can be extended by using [OnOffParameters](./OnOffParameters.md)
-to define the on/off state of the Flow, or by using [InvestParameters](./InvestParameters.md)
-to change the size of the Flow from a constant to an optimization variable.
diff --git a/docs/user-guide/Mathematical Notation/InvestParameters.md b/docs/user-guide/Mathematical Notation/InvestParameters.md
deleted file mode 100644
index d3cd4f81e..000000000
--- a/docs/user-guide/Mathematical Notation/InvestParameters.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# InvestParameters
-
-This is a work in progress.
diff --git a/docs/user-guide/Mathematical Notation/LinearConverter.md b/docs/user-guide/Mathematical Notation/LinearConverter.md
deleted file mode 100644
index bf1279c32..000000000
--- a/docs/user-guide/Mathematical Notation/LinearConverter.md
+++ /dev/null
@@ -1,21 +0,0 @@
-[`LinearConverters`][flixopt.components.LinearConverter] define a ratio between incoming and outgoing [Flows](Flow.md).
-
-$$ \label{eq:Linear-Transformer-Ratio}
- \sum_{f_{\text{in}} \in \mathcal F_{in}} \text a_{f_{\text{in}}}(\text{t}_i) \cdot p_{f_\text{in}}(\text{t}_i) = \sum_{f_{\text{out}} \in \mathcal F_{out}} \text b_{f_\text{out}}(\text{t}_i) \cdot p_{f_\text{out}}(\text{t}_i)
-$$
-
-With:
-
-- $\mathcal F_{in}$ and $\mathcal F_{out}$ being the set of all incoming and outgoing flows
-- $p_{f_\text{in}}(\text{t}_i)$ and $p_{f_\text{out}}(\text{t}_i)$ being the flow-rate at time $\text{t}_i$ for flow $f_\text{in}$ and $f_\text{out}$, respectively
-- $\text a_{f_\text{in}}(\text{t}_i)$ and $\text b_{f_\text{out}}(\text{t}_i)$ being the ratio of the flow-rate at time $\text{t}_i$ for flow $f_\text{in}$ and $f_\text{out}$, respectively
-
-With one incoming **Flow** and one outgoing **Flow**, this can be simplified to:
-
-$$ \label{eq:Linear-Transformer-Ratio-simple}
- \text a(\text{t}_i) \cdot p_{f_\text{in}}(\text{t}_i) = p_{f_\text{out}}(\text{t}_i)
-$$
-
-where $\text a$ can be interpreted as the conversion efficiency of the **LinearConverter**.
-#### Piecewise Conversion factors
-The conversion efficiency can be defined as a piecewise linear approximation. See [Piecewise](Piecewise.md) for more details.
diff --git a/docs/user-guide/Mathematical Notation/OnOffParameters.md b/docs/user-guide/Mathematical Notation/OnOffParameters.md
deleted file mode 100644
index ca22d7d33..000000000
--- a/docs/user-guide/Mathematical Notation/OnOffParameters.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# OnOffParameters
-
-This is a work in progress.
diff --git a/docs/user-guide/Mathematical Notation/index.md b/docs/user-guide/Mathematical Notation/index.md
deleted file mode 100644
index b76a1ba1f..000000000
--- a/docs/user-guide/Mathematical Notation/index.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-# Mathematical Notation
-
-## Naming Conventions
-
-FlixOpt uses the following naming conventions:
-
-- All optimization variables are denoted by italic letters (e.g., $x$, $y$, $z$)
-- All parameters and constants are denoted by non italic small letters (e.g., $\text{a}$, $\text{b}$, $\text{c}$)
-- All Sets are denoted by greek capital letters (e.g., $\mathcal{F}$, $\mathcal{E}$)
-- All units of a set are denoted by greek small letters (e.g., $\mathcal{f}$, $\mathcal{e}$)
-- The letter $i$ is used to denote an index (e.g., $i=1,\dots,\text n$)
-- All time steps are denoted by the letter $\text{t}$ (e.g., $\text{t}_0$, $\text{t}_1$, $\text{t}_i$)
-
-## Timesteps
-Time steps are defined as a sequence of discrete time steps $\text{t}_i \in \mathcal{T} \quad \text{for} \quad i \in \{1, 2, \dots, \text{n}\}$ (left-aligned in its timespan).
-From this sequence, the corresponding time intervals $\Delta \text{t}_i \in \Delta \mathcal{T}$ are derived as
-
-$$\Delta \text{t}_i = \text{t}_{i+1} - \text{t}_i \quad \text{for} \quad i \in \{1, 2, \dots, \text{n}-1\}$$
-
-The final time interval $\Delta \text{t}_\text n$ defaults to $\Delta \text{t}_\text n = \Delta \text{t}_{\text n-1}$, but is of course customizable.
-Non-equidistant time steps are also supported.
diff --git a/docs/user-guide/index.md b/docs/user-guide/index.md
index bc1738997..df97bf768 100644
--- a/docs/user-guide/index.md
+++ b/docs/user-guide/index.md
@@ -1,6 +1,8 @@
# FlixOpt Concepts
-FlixOpt is built around a set of core concepts that work together to represent and optimize energy and material flow systems. This page provides a high-level overview of these concepts and how they interact.
+FlixOpt is built around a set of core concepts that work together to represent and optimize **any system involving flows and conversions** - whether that's energy systems, material flows, supply chains, water networks, or production processes.
+
+This page provides a high-level overview of these concepts and how they interact.
## Core Concepts
@@ -45,28 +47,49 @@ Examples:
### Components
-[`Component`][flixopt.elements.Component] objects usually represent physical entities in your system that interact with [`Flows`][flixopt.elements.Flow]. They include:
+[`Component`][flixopt.elements.Component] objects usually represent physical entities in your system that interact with [`Flows`][flixopt.elements.Flow]. The generic component types work across all domains:
- [`LinearConverters`][flixopt.components.LinearConverter] - Converts input flows to output flows with (piecewise) linear relationships
+ - *Energy: boilers, heat pumps, turbines*
+ - *Manufacturing: assembly lines, processing equipment*
+ - *Chemistry: reactors, separators*
- [`Storages`][flixopt.components.Storage] - Stores energy or material over time
-- [`Sources`][flixopt.components.Source] / [`Sinks`][flixopt.components.Sink] / [`SourceAndSinks`][flixopt.components.SourceAndSink] - Produce or consume flows. They are usually used to model external demands or supplies.
+ - *Energy: batteries, thermal storage, gas storage*
+ - *Logistics: warehouses, buffer inventory*
+ - *Water: reservoirs, tanks*
+- [`Sources`][flixopt.components.Source] / [`Sinks`][flixopt.components.Sink] / [`SourceAndSinks`][flixopt.components.SourceAndSink] - Produce or consume flows
+ - *Energy: demands, renewable generation*
+ - *Manufacturing: raw material supply, product demand*
+ - *Supply chain: suppliers, customers*
- [`Transmissions`][flixopt.components.Transmission] - Moves flows between locations with possible losses
-- Specialized [`LinearConverters`][flixopt.components.LinearConverter] like [`Boilers`][flixopt.linear_converters.Boiler], [`HeatPumps`][flixopt.linear_converters.HeatPump], [`CHPs`][flixopt.linear_converters.CHP], etc. These simplify the usage of the `LinearConverter` class and can also be used as blueprint on how to define custom classes or parameterize existing ones.
+ - *Energy: pipelines, power lines*
+ - *Logistics: transport routes*
+ - *Water: distribution networks*
+
+**Pre-built specialized components** for energy systems include [`Boilers`][flixopt.linear_converters.Boiler], [`HeatPumps`][flixopt.linear_converters.HeatPump], [`CHPs`][flixopt.linear_converters.CHP], etc. These can serve as blueprints for custom domain-specific components.
### Effects
-[`Effect`][flixopt.effects.Effect] objects represent impacts or metrics related to your system, such as:
+[`Effect`][flixopt.effects.Effect] objects represent impacts or metrics related to your system. While commonly used to allocate costs, they're completely flexible:
+**Energy systems:**
- Costs (investment, operation)
- Emissions (CO₂, NOx, etc.)
-- Resource consumption
-- Area demand
+- Primary energy consumption
+
+**Other domains:**
+- Production time, labor hours (manufacturing)
+- Water consumption, wastewater (process industries)
+- Transport distance, vehicle utilization (logistics)
+- Space consumption
+- Any custom metric relevant to your domain
These can be freely defined and crosslink to each other (`CO₂` ──[specific CO₂-costs]─→ `Costs`).
One effect is designated as the **optimization objective** (typically Costs), while others can be constrained.
-This approach allows for a multi-criteria optimization using both...
- - ... the **Weigted Sum**Method, by Optimizing a theoretical Effect which other Effects crosslink to.
- - ... the ($\epsilon$-constraint method) by constraining effects.
+This approach allows for multi-criteria optimization using both (see the sketch below):
+
+ - **Weighted Sum Method**: Optimize a theoretical Effect which other Effects crosslink to
+ - **ε-constraint method**: Constrain effects to specific limits
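+
+A minimal sketch combining both methods, using the `share_from_temporal` and `maximum_periodic` parameters shown in the release notes (factor and bound values are illustrative):
+
+```python
+import flixopt as fx
+
+# Weighted sum: CO₂ crosslinks into the 'costs' objective via a price
+costs = fx.Effect('costs', '€', 'Total costs',
+                  share_from_temporal={'CO2': 0.18})  # 0.18 €/kg CO₂
+
+# ε-constraint: cap total emissions
+co2 = fx.Effect('CO2', 'kg', 'Emissions', maximum_periodic=50000)
+```
+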
### Calculation
diff --git a/docs/user-guide/mathematical-notation/dimensions.md b/docs/user-guide/mathematical-notation/dimensions.md
new file mode 100644
index 000000000..d1bc99c8e
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/dimensions.md
@@ -0,0 +1,264 @@
+# Dimensions
+
+FlixOpt's `FlowSystem` supports multiple dimensions for modeling optimization problems. Understanding these dimensions is crucial for interpreting the mathematical formulations presented in this documentation.
+
+## The Three Dimensions
+
+FlixOpt models can have up to three dimensions:
+
+1. **Time (`time`)** - **MANDATORY**
+ - Represents the temporal evolution of the system
+ - Defined via `pd.DatetimeIndex`
+ - Must contain at least 2 timesteps
+ - All optimization variables and constraints evolve over time
+2. **Period (`period`)** - **OPTIONAL**
+ - Represents independent planning periods (e.g., years 2020, 2021, 2022)
+ - Defined via `pd.Index` with integer values
+ - Used for multi-period optimization such as investment planning across years
+ - Each period is independent with its own time series
+3. **Scenario (`scenario`)** - **OPTIONAL**
+ - Represents alternative futures or uncertainty realizations (e.g., "Base Case", "High Demand")
+ - Defined via `pd.Index` with any labels
+ - Scenarios within the same period share the same time dimension
+ - Used for stochastic optimization or scenario comparison
+
+---
+
+## Dimensional Structure
+
+**Coordinate System:**
+
+```python
+from typing import Literal
+
+import pandas as pd
+
+FlowSystemDimensions = Literal['time', 'period', 'scenario']
+
+coords = {
+ 'time': pd.DatetimeIndex, # Always present
+ 'period': pd.Index | None, # Optional
+ 'scenario': pd.Index | None # Optional
+}
+```
+
+**Example:**
+```python
+import pandas as pd
+import numpy as np
+import flixopt as fx
+
+timesteps = pd.date_range('2020-01-01', periods=24, freq='h')
+scenarios = pd.Index(['Base Case', 'High Demand'])
+periods = pd.Index([2020, 2021, 2022])
+
+flow_system = fx.FlowSystem(
+ timesteps=timesteps,
+ periods=periods,
+ scenarios=scenarios,
+    weights=np.array([[0.5, 0.5]] * 3),  # scenario weights per period, shape (n_periods, n_scenarios)
+)
+```
+
+This creates a system with:
+
+- 24 time steps per scenario per period
+- 2 scenarios with equal weights (0.5 each)
+- 3 periods (years)
+- **Total decision space:** 24 × 2 × 3 = 144 time-scenario-period combinations
+
+---
+
+## Independence of Formulations
+
+**All mathematical formulations in this documentation are independent of whether periods or scenarios are present.**
+
+The equations shown throughout this documentation (for [Flow](elements/Flow.md), [Storage](elements/Storage.md), [Bus](elements/Bus.md), etc.) are written with only the time index $\text{t}_i$. When periods and/or scenarios are added, **the same equations apply** - they are simply expanded to additional dimensions.
+
+### How Dimensions Expand Formulations
+
+**Flow rate bounds** (from [Flow](elements/Flow.md)):
+
+$$
+\text{P} \cdot \text{p}^{\text{L}}_{\text{rel}}(\text{t}_{i}) \leq p(\text{t}_{i}) \leq \text{P} \cdot \text{p}^{\text{U}}_{\text{rel}}(\text{t}_{i})
+$$
+
+This equation remains valid regardless of dimensions:
+
+| Dimensions Present | Variable Indexing | Interpretation |
+|-------------------|-------------------|----------------|
+| Time only | $p(\text{t}_i)$ | Flow rate at time $\text{t}_i$ |
+| Time + Scenario | $p(\text{t}_i, s)$ | Flow rate at time $\text{t}_i$ in scenario $s$ |
+| Time + Period | $p(\text{t}_i, y)$ | Flow rate at time $\text{t}_i$ in period $y$ |
+| Time + Period + Scenario | $p(\text{t}_i, y, s)$ | Flow rate at time $\text{t}_i$ in period $y$, scenario $s$ |
+
+**The mathematical relationship remains identical** - only the indexing expands.
+
+---
+
+## Independence Between Scenarios and Periods
+
+**There is no interconnection between scenarios and periods, except for shared investment decisions within a period.**
+
+### Scenario Independence
+
+Scenarios within a period are **operationally independent**:
+
+- Each scenario has its own operational variables: $p(\text{t}_i, s_1)$ and $p(\text{t}_i, s_2)$ are independent
+- Scenarios cannot exchange energy, information, or resources
+- Storage states are tracked separately: $c(\text{t}_i, s_1)$ and $c(\text{t}_i, s_2)$ are distinct variables
+- Binary (on/off) states are likewise independent across scenarios
+
+Scenarios are connected **only through the objective function** via weights:
+
+$$
+\min \quad \sum_{s \in \mathcal{S}} w_s \cdot \text{Objective}_s
+$$
+
+Where:
+
+- $\mathcal{S}$ is the set of scenarios
+- $w_s$ is the weight for scenario $s$
+- The optimizer balances performance across scenarios according to their weights
+
+### Period Independence
+
+Periods are **completely independent** optimization problems:
+
+- Each period has separate operational variables
+- Each period has separate investment decisions
+- No temporal coupling between periods (e.g., storage state at end of period $y$ does not affect period $y+1$)
+- Periods cannot exchange resources or information
+
+Periods are connected **only through weighted aggregation** in the objective:
+
+$$
+\min \quad \sum_{y \in \mathcal{Y}} w_y \cdot \text{Objective}_y
+$$
+
+### Shared Periodic Decisions: The Exception
+
+**Investment decisions (sizes) can be shared across all scenarios:**
+
+By default, sizes (e.g., storage capacity, thermal power, ...) are **scenario-independent**, while **flow rates** are **scenario-specific**.
+
+**Example - Flow with investment:**
+
+$$
+v_\text{invest}(y) = s_\text{invest}(y) \cdot \text{size}_\text{fixed} \quad \text{(one decision per period)}
+$$
+
+$$
+p(\text{t}_i, y, s) \leq v_\text{invest}(y) \cdot \text{rel}_\text{upper} \quad \forall s \in \mathcal{S} \quad \text{(same capacity for all scenarios)}
+$$
+
+**Interpretation:**
+- "We decide once in period $y$ how much capacity to build" (periodic decision)
+- "This capacity is then operated differently in each scenario $s$ within period $y$" (temporal decisions)
+- "Periodic effects (investment) are incurred once per period, temporal effects (operational) are weighted across scenarios"
+
+This reflects real-world investment under uncertainty: you build capacity once (periodic/investment decision), but it operates under different conditions (temporal/operational decisions per scenario).
+
+**Mathematical Flexibility:**
+
+Variables can be either scenario-independent or scenario-specific:
+
+| Variable Type | Scenario-Independent | Scenario-Specific |
+|---------------|---------------------|-------------------|
+| **Sizes** (e.g., $\text{P}$) | $\text{P}(y)$ - Single value per period | $\text{P}(y, s)$ - Different per scenario |
+| **Flow rates** (e.g., $p(\text{t}_i)$) | $p(\text{t}_i, y)$ - Same across scenarios | $p(\text{t}_i, y, s)$ - Different per scenario |
+
+**Use Cases:**
+
+*Investment problems (with InvestParameters):*
+
+- **Sizes shared** (default): Investment under uncertainty - build capacity that performs well across all scenarios
+- **Sizes vary**: Scenario-specific capacity planning where different investments can be made for each future
+- **Selected sizes shared**: Mix of shared critical infrastructure and scenario-specific optional/flexible capacity
+
+*Dispatch problems (fixed sizes, no investments):*
+
+- **Flow rates shared**: Robust dispatch - find a single operational strategy that works across all forecast scenarios (e.g., day-ahead unit commitment under demand/weather uncertainty)
+- **Flow rates vary** (default): Scenario-adaptive dispatch - optimize operations for each scenario's specific conditions (demand, weather, prices)
+
+For implementation details on controlling scenario independence, see the [`FlowSystem`][flixopt.flow_system.FlowSystem] API reference.
+
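+A minimal sketch of controlling scenario independence (the parameters `scenario_independent_sizes` and `scenario_independent_flow_rates` are taken from the release notes; passing them to the `FlowSystem` constructor is an assumption):
+
+```python
+import pandas as pd
+import flixopt as fx
+
+flow_system = fx.FlowSystem(
+    timesteps=pd.date_range('2020-01-01', periods=24, freq='h'),
+    scenarios=pd.Index(['Base Case', 'High Demand']),
+    scenario_independent_sizes=True,        # one investment decision shared across scenarios (default)
+    scenario_independent_flow_rates=False,  # operations adapt to each scenario (default)
+)
+```
+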
+---
+
+## Dimensional Impact on Objective Function
+
+The objective function aggregates effects across all dimensions with weights:
+
+### Time Only
+$$
+\min \quad \sum_{\text{t}_i \in \mathcal{T}} \sum_{e \in \mathcal{E}} s_{e}(\text{t}_i)
+$$
+
+### Time + Scenario
+$$
+\min \quad \sum_{s \in \mathcal{S}} w_s \cdot \left( \sum_{\text{t}_i \in \mathcal{T}} \sum_{e \in \mathcal{E}} s_{e}(\text{t}_i, s) \right)
+$$
+
+### Time + Period
+$$
+\min \quad \sum_{y \in \mathcal{Y}} w_y \cdot \left( \sum_{\text{t}_i \in \mathcal{T}} \sum_{e \in \mathcal{E}} s_{e}(\text{t}_i, y) \right)
+$$
+
+### Time + Period + Scenario (Full Multi-Dimensional)
+$$
+\min \quad \sum_{y \in \mathcal{Y}} \sum_{s \in \mathcal{S}} w_{y,s} \cdot \left( \sum_{\text{t}_i \in \mathcal{T}} \sum_{e \in \mathcal{E}} s_{e}(\text{t}_i, y, s) \right)
+$$
+
+Where:
+
+- $\mathcal{T}$ is the set of time steps
+- $\mathcal{E}$ is the set of effects
+- $\mathcal{S}$ is the set of scenarios
+- $\mathcal{Y}$ is the set of periods
+- $s_{e}(\cdots)$ are the effect contributions (costs, emissions, etc.)
+- $w_s, w_y, w_{y,s}$ are the dimension weights
+
+**See [Effects, Penalty & Objective](effects-penalty-objective.md) for complete formulations including:**
+
+- How temporal and periodic effects expand with dimensions
+- Detailed objective function for each dimensional case
+- Periodic (investment) vs temporal (operational) effect handling
+
+---
+
+## Weights
+
+Weights determine the relative importance of scenarios and periods in the objective function.
+
+**Specification:**
+
+```python
+flow_system = fx.FlowSystem(
+ timesteps=timesteps,
+ periods=periods,
+ scenarios=scenarios,
+ weights=weights # Shape depends on dimensions
+)
+```
+
+**Weight Dimensions:**
+
+| Dimensions Present | Weight Shape | Example | Meaning |
+|-------------------|--------------|---------|---------|
+| Time + Scenario | 1D array of length `n_scenarios` | `[0.3, 0.7]` | Scenario probabilities |
+| Time + Period | 1D array of length `n_periods` | `[0.5, 0.3, 0.2]` | Period importance |
+| Time + Period + Scenario | 2D array `(n_periods, n_scenarios)` | `[[0.25, 0.25], [0.25, 0.25]]` | Combined weights |
+
+**Default:** If not specified, all scenarios/periods have equal weight (normalized to sum to 1).
+
+**Normalization:** Set `normalize_weights=True` in `Calculation` to automatically normalize weights to sum to 1.
+
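+For example, a system with three periods and two scenarios takes a 2D weight array, as in the table above (values are illustrative):
+
+```python
+import numpy as np
+
+# Shape (n_periods, n_scenarios): rows are periods, columns are scenarios
+weights = np.array([
+    [0.25, 0.25],  # period 2020
+    [0.15, 0.15],  # period 2021
+    [0.10, 0.10],  # period 2022
+])  # sums to 1.0 across all period-scenario combinations
+```
+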
+---
+
+## Summary Table
+
+| Dimension | Required? | Independence | Typical Use Case |
+|-----------|-----------|--------------|------------------|
+| **time** | ✅ Yes | Variables evolve over time via constraints (e.g., storage balance) | All optimization problems |
+| **scenario** | ❌ No | Fully independent operations; shared investments within period | Uncertainty modeling, risk assessment |
+| **period** | ❌ No | Fully independent; no coupling between periods | Multi-year planning, long-term investment |
+
+**Key Principle:** All constraints and formulations operate **within** each (period, scenario) combination independently. Only the objective function couples them via weighted aggregation.
+
+---
+
+## See Also
+
+- [Effects, Penalty & Objective](effects-penalty-objective.md) - How dimensions affect the objective function
+- [InvestParameters](features/InvestParameters.md) - Investment decisions across scenarios
+- [FlowSystem API][flixopt.flow_system.FlowSystem] - Creating multi-dimensional systems
diff --git a/docs/user-guide/mathematical-notation/effects-penalty-objective.md b/docs/user-guide/mathematical-notation/effects-penalty-objective.md
new file mode 100644
index 000000000..0759ef5ee
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/effects-penalty-objective.md
@@ -0,0 +1,286 @@
+# Effects, Penalty & Objective
+
+## Effects
+
+[`Effects`][flixopt.effects.Effect] are used to quantify system-wide impacts like costs, emissions, or resource consumption. These arise from **shares** contributed by **Elements** such as [Flows](elements/Flow.md), [Storage](elements/Storage.md), and other components.
+
+**Example:**
+
+[`Flows`][flixopt.elements.Flow] have an attribute `effects_per_flow_hour` that defines the effect contribution per flow-hour (see the sketch after this list):
+
+- Costs (€/kWh)
+- Emissions (kg CO₂/kWh)
+- Primary energy consumption (kWh_primary/kWh)
+
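+A minimal sketch (labels and factor values are illustrative; the argument order of `Flow` is an assumption):
+
+```python
+import flixopt as fx
+
+gas_import = fx.Flow(
+    'gas_import',
+    bus='Gas',  # reference the Bus by its label
+    effects_per_flow_hour={'costs': 0.04, 'CO2': 0.2},  # 0.04 €/kWh and 0.2 kg CO₂/kWh
+)
+```
+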
+Effects are categorized into two domains:
+
+1. **Temporal effects** - Time-dependent contributions (e.g., operational costs, hourly emissions)
+2. **Periodic effects** - Time-independent contributions (e.g., investment costs, fixed annual fees)
+
+### Multi-Dimensional Effects
+
+**The formulations below are written with time index $\text{t}_i$ only, but automatically expand when periods and/or scenarios are present.**
+
+When the FlowSystem has additional dimensions (see [Dimensions](dimensions.md)):
+
+- **Temporal effects** are indexed by all present dimensions: $E_{e,\text{temp}}(\text{t}_i, y, s)$
+- **Periodic effects** are indexed by period only (scenario-independent within a period): $E_{e,\text{per}}(y)$
+- Effects are aggregated with dimension weights in the objective function
+
+For complete details on how dimensions affect effects and the objective, see [Dimensions](dimensions.md).
+
+---
+
+## Effect Formulation
+
+### Shares from Elements
+
+Each element $l$ contributes shares to effect $e$ in both temporal and periodic domains:
+
+**Periodic shares** (time-independent):
+$$ \label{eq:Share_periodic}
+s_{l \rightarrow e, \text{per}} = \sum_{v \in \mathcal{V}_{l, \text{per}}} v \cdot \text{a}_{v \rightarrow e}
+$$
+
+**Temporal shares** (time-dependent):
+$$ \label{eq:Share_temporal}
+s_{l \rightarrow e, \text{temp}}(\text{t}_i) = \sum_{v \in \mathcal{V}_{l,\text{temp}}} v(\text{t}_i) \cdot \text{a}_{v \rightarrow e}(\text{t}_i)
+$$
+
+Where:
+
+- $\text{t}_i$ is the time step
+- $\mathcal{V}_l$ is the set of all optimization variables of element $l$
+- $\mathcal{V}_{l, \text{per}}$ is the subset of periodic (investment-related) variables
+- $\mathcal{V}_{l, \text{temp}}$ is the subset of temporal (operational) variables
+- $v$ is an optimization variable
+- $v(\text{t}_i)$ is the variable value at timestep $\text{t}_i$
+- $\text{a}_{v \rightarrow e}$ is the effect factor (e.g., €/kW for investment, €/kWh for operation)
+- $s_{l \rightarrow e, \text{per}}$ is the periodic share of element $l$ to effect $e$
+- $s_{l \rightarrow e, \text{temp}}(\text{t}_i)$ is the temporal share of element $l$ to effect $e$
+
+**Examples:**
+- **Periodic share**: Investment cost = $\text{size} \cdot \text{specific\_cost}$ (€/kW)
+- **Temporal share**: Operational cost = $\text{flow\_rate}(\text{t}_i) \cdot \text{price}(\text{t}_i)$ (€/kWh)
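+
+A minimal sketch of where these shares come from in the API (labels and values are placeholders; see [Flow](elements/Flow.md) and [InvestParameters](features/InvestParameters.md)):
+
+```python
+import flixopt as fx
+
+# Temporal share: each timestep adds flow_rate(t) · Δt · 0.05 € to 'costs'
+gas = fx.Flow('gas', bus='gas_grid', size=100, effects_per_flow_hour={'costs': 0.05})
+
+# Periodic share: an investment adds size · 20 € to 'costs' once
+invest = fx.InvestParameters(maximum_size=100, effects_of_investment_per_size={'costs': 20})
+```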
+
+---
+
+### Cross-Effect Contributions
+
+Effects can contribute shares to other effects, enabling relationships like carbon pricing or resource accounting.
+
+An effect $x$ can contribute to another effect $e \in \mathcal{E}\backslash x$ via conversion factors:
+
+**Example:** CO₂ emissions (kg) → Monetary costs (€)
+- Effect $x$: "CO₂ emissions" (unit: kg)
+- Effect $e$: "costs" (unit: €)
+- Factor $\text{r}_{x \rightarrow e}$: CO₂ price (€/kg)
+
+**Note:** Cross-effect shares must be acyclic; circular references (e.g., $x \rightarrow e$ and $e \rightarrow x$) are not allowed.
+
+### Total Effect Calculation
+
+**Periodic effects** aggregate element shares and cross-effect contributions:
+
+$$ \label{eq:Effect_periodic}
+E_{e, \text{per}} =
+\sum_{l \in \mathcal{L}} s_{l \rightarrow e,\text{per}} +
+\sum_{x \in \mathcal{E}\backslash e} E_{x, \text{per}} \cdot \text{r}_{x \rightarrow e,\text{per}}
+$$
+
+**Temporal effects** at each timestep:
+
+$$ \label{eq:Effect_temporal}
+E_{e, \text{temp}}(\text{t}_{i}) =
+\sum_{l \in \mathcal{L}} s_{l \rightarrow e, \text{temp}}(\text{t}_i) +
+\sum_{x \in \mathcal{E}\backslash e} E_{x, \text{temp}}(\text{t}_i) \cdot \text{r}_{x \rightarrow {e},\text{temp}}(\text{t}_i)
+$$
+
+**Total temporal effects** (sum over all timesteps):
+
+$$\label{eq:Effect_temporal_total}
+E_{e,\text{temp},\text{tot}} = \sum_{i=1}^n E_{e,\text{temp}}(\text{t}_{i})
+$$
+
+**Total effect** (combining both domains):
+
+$$ \label{eq:Effect_Total}
+E_{e} = E_{e,\text{per}} + E_{e,\text{temp},\text{tot}}
+$$
+
+Where:
+
+- $\mathcal{L}$ is the set of all elements in the FlowSystem
+- $\mathcal{E}$ is the set of all effects
+- $\text{r}_{x \rightarrow e, \text{per}}$ is the periodic conversion factor from effect $x$ to effect $e$
+- $\text{r}_{x \rightarrow e, \text{temp}}(\text{t}_i)$ is the temporal conversion factor
+
+---
+
+### Constraining Effects
+
+Effects can be bounded to enforce limits on costs, emissions, or other impacts:
+
+**Total bounds** (apply to $E_{e,\text{per}}$, $E_{e,\text{temp},\text{tot}}$, or $E_e$):
+
+$$ \label{eq:Bounds_Total}
+E^\text{L} \leq E \leq E^\text{U}
+$$
+
+**Temporal bounds per timestep:**
+
+$$ \label{eq:Bounds_Timestep}
+E_{e,\text{temp}}^\text{L}(\text{t}_i) \leq E_{e,\text{temp}}(\text{t}_i) \leq E_{e,\text{temp}}^\text{U}(\text{t}_i)
+$$
+
+**Implementation:** See [`Effect`][flixopt.effects.Effect] parameters:
+- `minimum_temporal`, `maximum_temporal` - Total temporal bounds
+- `minimum_per_hour`, `maximum_per_hour` - Hourly temporal bounds
+- `minimum_periodic`, `maximum_periodic` - Periodic bounds
+- `minimum_total`, `maximum_total` - Combined total bounds
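+
+For example, an ε-constraint style emission cap could be declared as follows (a sketch; label and bound are placeholders):
+
+```python
+import flixopt as fx
+
+co2 = fx.Effect('CO2', 'kg', 'CO2 emissions', maximum_total=1000)  # cap over the whole horizon
+```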
+
+---
+
+## Penalty
+
+In addition to user-defined [Effects](#effects), every FlixOpt model includes a **Penalty** term $\Phi$ to:
+- Prevent infeasible problems
+- Simplify troubleshooting by allowing constraint violations with high cost
+
+Penalty shares originate from elements, similar to effect shares:
+
+$$ \label{eq:Penalty}
+\Phi = \sum_{l \in \mathcal{L}} \left( s_{l \rightarrow \Phi} +\sum_{\text{t}_i \in \mathcal{T}} s_{l \rightarrow \Phi}(\text{t}_{i}) \right)
+$$
+
+Where:
+
+- $\mathcal{L}$ is the set of all elements
+- $\mathcal{T}$ is the set of all timesteps
+- $s_{l \rightarrow \Phi}$ is the penalty share from element $l$
+
+**Current usage:** Penalties primarily occur in [Buses](elements/Bus.md) via the `excess_penalty_per_flow_hour` parameter, which allows nodal imbalances at a high cost.
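+
+A minimal sketch (the penalty value is a placeholder):
+
+```python
+import flixopt as fx
+
+heat_bus = fx.Bus('heat', excess_penalty_per_flow_hour=1e5)  # permit imbalance, at a high cost
+```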
+
+---
+
+## Objective Function
+
+The optimization objective minimizes the chosen effect plus any penalties:
+
+$$ \label{eq:Objective}
+\min \left( E_{\Omega} + \Phi \right)
+$$
+
+Where:
+
+- $E_{\Omega}$ is the chosen **objective effect** (see $\eqref{eq:Effect_Total}$)
+- $\Phi$ is the [penalty](#penalty) term
+
+One effect must be designated as the objective via `is_objective=True`.
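+
+A minimal sketch:
+
+```python
+import flixopt as fx
+
+costs = fx.Effect('costs', '€', 'Total costs', is_objective=True)
+```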
+
+### Multi-Criteria Optimization
+
+This formulation supports multiple optimization approaches:
+
+**1. Weighted Sum Method**
+- The objective effect can incorporate other effects via cross-effect factors
+- Example: Minimize costs while including carbon pricing: $\text{CO}_2 \rightarrow \text{costs}$
+
+**2. ε-Constraint Method**
+- Optimize one effect while constraining others
+- Example: Minimize costs subject to $\text{CO}_2 \leq 1000$ kg
+
+---
+
+## Objective with Multiple Dimensions
+
+When the FlowSystem includes **periods** and/or **scenarios** (see [Dimensions](dimensions.md)), the objective aggregates effects across all dimensions using weights.
+
+### Time Only (Base Case)
+
+$$
+\min \quad E_{\Omega} + \Phi = \sum_{\text{t}_i \in \mathcal{T}} E_{\Omega,\text{temp}}(\text{t}_i) + E_{\Omega,\text{per}} + \Phi
+$$
+
+Where:
+- Temporal effects sum over time: $\sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i)$
+- Periodic effects are constant: $E_{\Omega,\text{per}}$
+- Penalty sums over time: $\Phi = \sum_{\text{t}_i} \Phi(\text{t}_i)$
+
+---
+
+### Time + Scenario
+
+$$
+\min \quad \sum_{s \in \mathcal{S}} w_s \cdot \left( E_{\Omega}(s) + \Phi(s) \right)
+$$
+
+Where:
+- $\mathcal{S}$ is the set of scenarios
+- $w_s$ is the weight for scenario $s$ (typically scenario probability)
+- Periodic effects are **shared across scenarios**: $E_{\Omega,\text{per}}$ (same for all $s$)
+- Temporal effects are **scenario-specific**: $E_{\Omega,\text{temp}}(s) = \sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i, s)$
+- Penalties are **scenario-specific**: $\Phi(s) = \sum_{\text{t}_i} \Phi(\text{t}_i, s)$
+
+**Interpretation:**
+- Investment decisions (periodic) made once, used across all scenarios
+- Operations (temporal) differ by scenario
+- Objective balances expected value across scenarios
+
+---
+
+### Time + Period
+
+$$
+\min \quad \sum_{y \in \mathcal{Y}} w_y \cdot \left( E_{\Omega}(y) + \Phi(y) \right)
+$$
+
+Where:
+- $\mathcal{Y}$ is the set of periods (e.g., years)
+- $w_y$ is the weight for period $y$ (typically annual discount factor)
+- Each period $y$ has **independent** periodic and temporal effects
+- Each period $y$ has **independent** investment and operational decisions
+
+---
+
+### Time + Period + Scenario (Full Multi-Dimensional)
+
+$$
+\min \quad \sum_{y \in \mathcal{Y}} \left[ w_y \cdot E_{\Omega,\text{per}}(y) + \sum_{s \in \mathcal{S}} w_{y,s} \cdot \left( E_{\Omega,\text{temp}}(y,s) + \Phi(y,s) \right) \right]
+$$
+
+Where:
+- $\mathcal{S}$ is the set of scenarios
+- $\mathcal{Y}$ is the set of periods
+- $w_y$ is the period weight (for periodic effects)
+- $w_{y,s}$ is the combined period-scenario weight (for temporal effects)
+- **Periodic effects** $E_{\Omega,\text{per}}(y)$ are period-specific but **scenario-independent**
+- **Temporal effects** $E_{\Omega,\text{temp}}(y,s) = \sum_{\text{t}_i} E_{\Omega,\text{temp}}(\text{t}_i, y, s)$ are **fully indexed**
+- **Penalties** $\Phi(y,s)$ are **fully indexed**
+
+**Key Principle:**
+- Scenarios and periods are **operationally independent** (no energy/resource exchange)
+- Coupled **only through the weighted objective function**
+- **Periodic effects within a period are shared across all scenarios** (investment made once per period)
+- **Temporal effects are independent per scenario** (different operations under different conditions)
+
+---
+
+## Summary
+
+| Concept | Formulation | Time Dependency | Dimension Indexing |
+|---------|-------------|-----------------|-------------------|
+| **Temporal share** | $s_{l \rightarrow e, \text{temp}}(\text{t}_i)$ | Time-dependent | $(t, y, s)$ when present |
+| **Periodic share** | $s_{l \rightarrow e, \text{per}}$ | Time-independent | $(y)$ when periods present |
+| **Total temporal effect** | $E_{e,\text{temp},\text{tot}} = \sum_{\text{t}_i} E_{e,\text{temp}}(\text{t}_i)$ | Sum over time | Depends on dimensions |
+| **Total periodic effect** | $E_{e,\text{per}}$ | Constant | $(y)$ when periods present |
+| **Total effect** | $E_e = E_{e,\text{per}} + E_{e,\text{temp},\text{tot}}$ | Combined | Depends on dimensions |
+| **Objective** | $\min(E_{\Omega} + \Phi)$ | With weights when multi-dimensional | See formulations above |
+
+---
+
+## See Also
+
+- [Dimensions](dimensions.md) - Complete explanation of multi-dimensional modeling
+- [Flow](elements/Flow.md) - Temporal effect contributions via `effects_per_flow_hour`
+- [InvestParameters](features/InvestParameters.md) - Periodic effect contributions via investment
+- [Effect API][flixopt.effects.Effect] - Implementation details and parameters
diff --git a/docs/user-guide/Mathematical Notation/Bus.md b/docs/user-guide/mathematical-notation/elements/Bus.md
similarity index 78%
rename from docs/user-guide/Mathematical Notation/Bus.md
rename to docs/user-guide/mathematical-notation/elements/Bus.md
index 6ba17eede..bfe57d234 100644
--- a/docs/user-guide/Mathematical Notation/Bus.md
+++ b/docs/user-guide/mathematical-notation/elements/Bus.md
@@ -31,3 +31,19 @@ With:
- $\text{t}_i$ being the time step
- $s_{b \rightarrow \Phi}(\text{t}_i)$ being the penalty term
- $\text a_{b \rightarrow \Phi}(\text{t}_i)$ being the penalty coefficient (`excess_penalty_per_flow_hour`)
+
+---
+
+## Implementation
+
+**Python Class:** [`Bus`][flixopt.elements.Bus]
+
+See the API documentation for implementation details and usage examples.
+
+---
+
+## See Also
+
+- [Flow](../elements/Flow.md) - Definition of flow rates in the balance
+- [Effects, Penalty & Objective](../effects-penalty-objective.md) - How penalties are included in the objective function
+- [Modeling Patterns](../modeling-patterns/index.md) - Mathematical building blocks
diff --git a/docs/user-guide/mathematical-notation/elements/Flow.md b/docs/user-guide/mathematical-notation/elements/Flow.md
new file mode 100644
index 000000000..5914ba911
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/elements/Flow.md
@@ -0,0 +1,64 @@
+# Flow
+
+The flow rate is the main optimization variable of a Flow. It is bounded by the Flow's size and time-dependent relative bounds, as given in \eqref{eq:flow_rate}.
+
+$$ \label{eq:flow_rate}
+ \text P \cdot \text p^{\text{L}}_{\text{rel}}(\text{t}_{i})
+ \leq p(\text{t}_{i}) \leq
+ \text P \cdot \text p^{\text{U}}_{\text{rel}}(\text{t}_{i})
+$$
+
+With:
+
+- $\text P$ being the size of the Flow
+- $p(\text{t}_{i})$ being the flow-rate at time $\text{t}_{i}$
+- $\text p^{\text{L}}_{\text{rel}}(\text{t}_{i})$ being the relative lower bound (typically 0)
+- $\text p^{\text{U}}_{\text{rel}}(\text{t}_{i})$ being the relative upper bound (typically 1)
+
+With $\text p^{\text{L}}_{\text{rel}}(\text{t}_{i}) = 0$ and $\text p^{\text{U}}_{\text{rel}}(\text{t}_{i}) = 1$,
+equation \eqref{eq:flow_rate} simplifies to
+
+$$
+ 0 \leq p(\text{t}_{i}) \leq \text P
+$$
+
+
+This mathematical formulation can be extended by using [OnOffParameters](../features/OnOffParameters.md)
+to define the on/off state of the Flow, or by using [InvestParameters](../features/InvestParameters.md)
+to change the size of the Flow from a constant to an optimization variable.
+
+---
+
+## Mathematical Patterns Used
+
+Flow formulation uses the following modeling patterns:
+
+- **[Scaled Bounds](../modeling-patterns/bounds-and-states.md#scaled-bounds)** - Basic flow rate bounds (equation $\eqref{eq:flow_rate}$)
+- **[Scaled Bounds with State](../modeling-patterns/bounds-and-states.md#scaled-bounds-with-state)** - When combined with [OnOffParameters](../features/OnOffParameters.md)
+- **[Bounds with State](../modeling-patterns/bounds-and-states.md#bounds-with-state)** - Investment decisions with [InvestParameters](../features/InvestParameters.md)
+
+---
+
+## Implementation
+
+**Python Class:** [`Flow`][flixopt.elements.Flow]
+
+**Key Parameters:**
+- `size`: Flow size $\text{P}$ (can be fixed or variable with InvestParameters)
+- `relative_minimum`, `relative_maximum`: Relative bounds $\text{p}^{\text{L}}_{\text{rel}}, \text{p}^{\text{U}}_{\text{rel}}$
+- `effects_per_flow_hour`: Operational effects (costs, emissions, etc.)
+- `invest_parameters`: Optional investment modeling (see [InvestParameters](../features/InvestParameters.md))
+- `on_off_parameters`: Optional on/off operation (see [OnOffParameters](../features/OnOffParameters.md))
+
+See the [`Flow`][flixopt.elements.Flow] API documentation for complete parameter list and usage examples.
+
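+A minimal sketch tying these parameters together (bus label and values are placeholders):
+
+```python
+import flixopt as fx
+
+heat = fx.Flow(
+    'heat',
+    bus='district_heat',
+    size=100,               # P = 100 kW
+    relative_minimum=0.2,   # p(t) ≥ 20 kW whenever no on/off state applies
+    relative_maximum=1.0,   # p(t) ≤ 100 kW
+    effects_per_flow_hour={'costs': 0.05},
+)
+```
+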
+---
+
+## See Also
+
+- [OnOffParameters](../features/OnOffParameters.md) - Binary on/off operation
+- [InvestParameters](../features/InvestParameters.md) - Variable flow sizing
+- [Bus](../elements/Bus.md) - Flow balance constraints
+- [LinearConverter](../elements/LinearConverter.md) - Flow ratio constraints
+- [Storage](../elements/Storage.md) - Flow integration over time
+- [Modeling Patterns](../modeling-patterns/index.md) - Mathematical building blocks
diff --git a/docs/user-guide/mathematical-notation/elements/LinearConverter.md b/docs/user-guide/mathematical-notation/elements/LinearConverter.md
new file mode 100644
index 000000000..b007aa7f5
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/elements/LinearConverter.md
@@ -0,0 +1,50 @@
+# LinearConverter
+
+[`LinearConverters`][flixopt.components.LinearConverter] define a ratio between incoming and outgoing [Flows](../elements/Flow.md).
+
+$$ \label{eq:Linear-Transformer-Ratio}
+ \sum_{f_{\text{in}} \in \mathcal F_{in}} \text a_{f_{\text{in}}}(\text{t}_i) \cdot p_{f_\text{in}}(\text{t}_i) = \sum_{f_{\text{out}} \in \mathcal F_{out}} \text b_{f_\text{out}}(\text{t}_i) \cdot p_{f_\text{out}}(\text{t}_i)
+$$
+
+With:
+
+- $\mathcal F_{in}$ and $\mathcal F_{out}$ being the set of all incoming and outgoing flows
+- $p_{f_\text{in}}(\text{t}_i)$ and $p_{f_\text{out}}(\text{t}_i)$ being the flow-rate at time $\text{t}_i$ for flow $f_\text{in}$ and $f_\text{out}$, respectively
+- $\text a_{f_\text{in}}(\text{t}_i)$ and $\text b_{f_\text{out}}(\text{t}_i)$ being the ratio of the flow-rate at time $\text{t}_i$ for flow $f_\text{in}$ and $f_\text{out}$, respectively
+
+With one incoming **Flow** and one outgoing **Flow**, this can be simplified to:
+
+$$ \label{eq:Linear-Transformer-Ratio-simple}
+ \text a(\text{t}_i) \cdot p_{f_\text{in}}(\text{t}_i) = p_{f_\text{out}}(\text{t}_i)
+$$
+
+where $\text a$ can be interpreted as the conversion efficiency of the **LinearConverter**.
+
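+As a sketch, a boiler-like converter with 90 % efficiency could be declared like this (assuming `conversion_factors` maps flow labels to the coefficients $\text a$ and $\text b$):
+
+```python
+import flixopt as fx
+
+boiler = fx.LinearConverter(
+    'boiler',
+    inputs=[fx.Flow('fuel', bus='gas')],
+    outputs=[fx.Flow('heat', bus='district_heat')],
+    conversion_factors=[{'fuel': 0.9, 'heat': 1}],  # 0.9 · p_fuel(t) = p_heat(t)
+)
+```
+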
+## Piecewise Conversion Factors
+The conversion efficiency can be defined as a piecewise linear approximation. See [Piecewise](../features/Piecewise.md) for more details.
+
+---
+
+## Implementation
+
+**Python Class:** [`LinearConverter`][flixopt.components.LinearConverter]
+
+**Specialized Linear Converters:**
+
+FlixOpt provides specialized linear converter classes for common applications:
+
+- **[`HeatPump`][flixopt.linear_converters.HeatPump]** - Coefficient of Performance (COP) based conversion
+- **[`Power2Heat`][flixopt.linear_converters.Power2Heat]** - Electric heating with efficiency ≤ 1
+- **[`CHP`][flixopt.linear_converters.CHP]** - Combined heat and power generation
+- **[`Boiler`][flixopt.linear_converters.Boiler]** - Fuel to heat conversion
+
+These classes handle the mathematical formulation automatically based on physical relationships.
+
+See the API documentation for implementation details and usage examples.
+
+---
+
+## See Also
+
+- [Flow](../elements/Flow.md) - Definition of flow rates
+- [Piecewise](../features/Piecewise.md) - Non-linear conversion efficiency modeling
+- [InvestParameters](../features/InvestParameters.md) - Variable converter sizing
+- [Modeling Patterns](../modeling-patterns/index.md) - Mathematical building blocks
diff --git a/docs/user-guide/Mathematical Notation/Storage.md b/docs/user-guide/mathematical-notation/elements/Storage.md
similarity index 52%
rename from docs/user-guide/Mathematical Notation/Storage.md
rename to docs/user-guide/mathematical-notation/elements/Storage.md
index 63f01d198..cd7046592 100644
--- a/docs/user-guide/Mathematical Notation/Storage.md
+++ b/docs/user-guide/mathematical-notation/elements/Storage.md
@@ -1,5 +1,5 @@
# Storages
-**Storages** have one incoming and one outgoing **[Flow](Flow.md)** with a charging and discharging efficiency.
+**Storages** have one incoming and one outgoing **[Flow](../elements/Flow.md)** with a charging and discharging efficiency.
A storage has a state of charge $c(\text{t}_i)$ which is limited by its `size` $\text C$ and relative bounds $\eqref{eq:Storage_Bounds}$.
$$ \label{eq:Storage_Bounds}
@@ -25,9 +25,9 @@ $ \dot{ \text c}_\text{rel, loss}(\text{t}_i)$ expresses the "loss fraction per
$$
\begin{align*}
- c(\text{t}_{i+1}) &= c(\text{t}_{i}) \cdot (1-\dot{\text{c}}_\text{rel,loss}(\text{t}_i) \cdot \Delta \text{t}_{i}) \\
+ c(\text{t}_{i+1}) &= c(\text{t}_{i}) \cdot (1-\dot{\text{c}}_\text{rel,loss}(\text{t}_i))^{\Delta \text{t}_{i}} \\
&\quad + p_{f_\text{in}}(\text{t}_i) \cdot \Delta \text{t}_i \cdot \eta_\text{in}(\text{t}_i) \\
- &\quad - \frac{p_{f_\text{out}}(\text{t}_i) \cdot \Delta \text{t}_i}{\eta_\text{out}(\text{t}_i)}
+ &\quad - p_{f_\text{out}}(\text{t}_i) \cdot \Delta \text{t}_i \cdot \eta_\text{out}(\text{t}_i)
\tag{3}
\end{align*}
$$
@@ -42,3 +42,38 @@ Where:
- $\eta_\text{in}(\text{t}_i)$ is the charging efficiency at time $\text{t}_i$
- $p_{f_\text{out}}(\text{t}_i)$ is the output flow rate at time $\text{t}_i$
- $\eta_\text{out}(\text{t}_i)$ is the discharging efficiency at time $\text{t}_i$
+
+---
+
+## Mathematical Patterns Used
+
+Storage formulation uses the following modeling patterns:
+
+- **[Basic Bounds](../modeling-patterns/bounds-and-states.md#basic-bounds)** - For charge state bounds (equation $\eqref{eq:Storage_Bounds}$)
+- **[Scaled Bounds](../modeling-patterns/bounds-and-states.md#scaled-bounds)** - For flow rate bounds relative to storage size
+
+When combined with investment parameters, storage can use:
+- **[Bounds with State](../modeling-patterns/bounds-and-states.md#bounds-with-state)** - Investment decisions (see [InvestParameters](../features/InvestParameters.md))
+
+---
+
+## Implementation
+
+**Python Class:** [`Storage`][flixopt.components.Storage]
+
+**Key Parameters:**
+- `capacity_in_flow_hours`: Storage capacity $\text{C}$
+- `relative_loss_per_hour`: Self-discharge rate $\dot{\text{c}}_\text{rel,loss}$
+- `initial_charge_state`: Initial charge $c(\text{t}_0)$
+- `minimal_final_charge_state`, `maximal_final_charge_state`: Final charge bounds $c(\text{t}_\text{end})$ (optional)
+- `eta_charge`, `eta_discharge`: Charging/discharging efficiencies $\eta_\text{in}, \eta_\text{out}$
+
+See the [`Storage`][flixopt.components.Storage] API documentation for complete parameter list and usage examples.
+
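+A minimal sketch using these parameters (labels and values are placeholders; assuming `charging`/`discharging` take the connected Flows):
+
+```python
+import flixopt as fx
+
+battery = fx.Storage(
+    'battery',
+    charging=fx.Flow('charge', bus='electricity', size=10),
+    discharging=fx.Flow('discharge', bus='electricity', size=10),
+    capacity_in_flow_hours=40,      # C = 40 kWh
+    eta_charge=0.95,
+    eta_discharge=0.95,
+    relative_loss_per_hour=0.001,   # 0.1 % self-discharge per hour
+    initial_charge_state=0,
+)
+```
+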
+---
+
+## See Also
+
+- [Flow](../elements/Flow.md) - Input and output flow definitions
+- [InvestParameters](../features/InvestParameters.md) - Variable storage sizing
+- [Modeling Patterns](../modeling-patterns/index.md) - Mathematical building blocks
diff --git a/docs/user-guide/mathematical-notation/features/InvestParameters.md b/docs/user-guide/mathematical-notation/features/InvestParameters.md
new file mode 100644
index 000000000..14fe02c79
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/features/InvestParameters.md
@@ -0,0 +1,302 @@
+# InvestParameters
+
+[`InvestParameters`][flixopt.interface.InvestParameters] model investment decisions in optimization problems, enabling both binary (invest/don't invest) and continuous sizing choices with comprehensive cost modeling.
+
+## Investment Decision Types
+
+FlixOpt supports two main types of investment decisions:
+
+### Binary Investment
+
+Fixed-size investment creating a yes/no decision (e.g., install a 100 kW generator):
+
+$$\label{eq:invest_binary}
+v_\text{invest} = s_\text{invest} \cdot \text{size}_\text{fixed}
+$$
+
+With:
+- $v_\text{invest}$ being the resulting investment size
+- $s_\text{invest} \in \{0, 1\}$ being the binary investment decision
+- $\text{size}_\text{fixed}$ being the predefined component size
+
+**Behavior:**
+- $s_\text{invest} = 0$: no investment ($v_\text{invest} = 0$)
+- $s_\text{invest} = 1$: invest at fixed size ($v_\text{invest} = \text{size}_\text{fixed}$)
+
+---
+
+### Continuous Sizing
+
+Variable-size investment with bounds (e.g., battery capacity from 10-1000 kWh):
+
+$$\label{eq:invest_continuous}
+s_\text{invest} \cdot \text{size}_\text{min} \leq v_\text{invest} \leq s_\text{invest} \cdot \text{size}_\text{max}
+$$
+
+With:
+- $v_\text{invest}$ being the investment size variable (continuous)
+- $s_\text{invest} \in \{0, 1\}$ being the binary investment decision
+- $\text{size}_\text{min}$ being the minimum investment size (if investing)
+- $\text{size}_\text{max}$ being the maximum investment size
+
+**Behavior:**
+- $s_\text{invest} = 0$: no investment ($v_\text{invest} = 0$)
+- $s_\text{invest} = 1$: invest with size in $[\text{size}_\text{min}, \text{size}_\text{max}]$
+
+This uses the **bounds with state** pattern described in [Bounds and States](../modeling-patterns/bounds-and-states.md#bounds-with-state).
+
+---
+
+### Optional vs. Mandatory Investment
+
+The `mandatory` parameter controls whether investment is required:
+
+**Optional Investment** (`mandatory=False`, default):
+$$\label{eq:invest_optional}
+s_\text{invest} \in \{0, 1\}
+$$
+
+The optimization can freely choose to invest or not.
+
+**Mandatory Investment** (`mandatory=True`):
+$$\label{eq:invest_mandatory}
+s_\text{invest} = 1
+$$
+
+The investment must occur (useful for mandatory upgrades or replacements).
+
+---
+
+## Effect Modeling
+
+Investment effects (costs, emissions, etc.) are modeled using three components:
+
+### Fixed Effects
+
+One-time effects incurred if investment is made, independent of size:
+
+$$\label{eq:invest_fixed_effects}
+E_{e,\text{fix}} = s_\text{invest} \cdot \text{fix}_e
+$$
+
+With:
+- $E_{e,\text{fix}}$ being the fixed contribution to effect $e$
+- $\text{fix}_e$ being the fixed effect value (e.g., fixed installation cost)
+
+**Examples:**
+- Fixed installation costs (permits, grid connection)
+- One-time environmental impacts (land preparation)
+- Fixed labor or administrative costs
+
+---
+
+### Specific Effects
+
+Effects proportional to investment size (per-unit costs):
+
+$$\label{eq:invest_specific_effects}
+E_{e,\text{spec}} = v_\text{invest} \cdot \text{spec}_e
+$$
+
+With:
+- $E_{e,\text{spec}}$ being the size-dependent contribution to effect $e$
+- $\text{spec}_e$ being the specific effect value per unit size (e.g., €/kW)
+
+**Examples:**
+- Equipment costs (€/kW)
+- Material requirements (kg steel/kW)
+- Recurring costs (€/kW/year maintenance)
+
+---
+
+### Piecewise Effects
+
+Non-linear effect relationships using piecewise linear approximations:
+
+$$\label{eq:invest_piecewise_effects}
+E_{e,\text{pw}} = \sum_{k=1}^{K} \lambda_k \cdot r_{e,k}
+$$
+
+Subject to:
+$$
+v_\text{invest} = \sum_{k=1}^{K} \lambda_k \cdot v_k
+$$
+
+With:
+- $E_{e,\text{pw}}$ being the piecewise contribution to effect $e$
+- $\lambda_k$ being the piecewise lambda variables (see [Piecewise](../features/Piecewise.md))
+- $r_{e,k}$ being the effect rate at piece $k$
+- $v_k$ being the size points defining the pieces
+
+**Use cases:**
+- Economies of scale (bulk discounts)
+- Technology learning curves
+- Threshold effects (capacity tiers with different costs)
+
+See [Piecewise](../features/Piecewise.md) for detailed mathematical formulation.
+
+---
+
+### Retirement Effects
+
+Effects incurred if investment is NOT made (when retiring/not replacing existing equipment):
+
+$$\label{eq:invest_retirement_effects}
+E_{e,\text{retirement}} = (1 - s_\text{invest}) \cdot \text{retirement}_e
+$$
+
+With:
+- $E_{e,\text{retirement}}$ being the retirement contribution to effect $e$
+- $\text{retirement}_e$ being the retirement effect value
+
+**Behavior:**
+- $s_\text{invest} = 0$: retirement effects are incurred
+- $s_\text{invest} = 1$: no retirement effects
+
+**Examples:**
+- Demolition or disposal costs
+- Decommissioning expenses
+- Contractual penalties for not investing
+- Opportunity costs or lost revenues
+
+---
+
+### Total Investment Effects
+
+The total contribution to effect $e$ from an investment is:
+
+$$\label{eq:invest_total_effects}
+E_{e,\text{invest}} = E_{e,\text{fix}} + E_{e,\text{spec}} + E_{e,\text{pw}} + E_{e,\text{retirement}}
+$$
+
+Effects integrate into the overall system effects as described in [Effects, Penalty & Objective](../effects-penalty-objective.md).
+
+---
+
+## Integration with Components
+
+Investment parameters modify component sizing:
+
+### Without Investment
+Component size is a fixed parameter:
+$$
+\text{size} = \text{size}_\text{nominal}
+$$
+
+### With Investment
+Component size becomes a variable:
+$$
+\text{size} = v_\text{invest}
+$$
+
+This size variable then appears in component constraints. For example, flow rate bounds become:
+
+$$
+v_\text{invest} \cdot \text{rel}_\text{lower} \leq p(t) \leq v_\text{invest} \cdot \text{rel}_\text{upper}
+$$
+
+Using the **scaled bounds** pattern from [Bounds and States](../modeling-patterns/bounds-and-states.md#scaled-bounds).
+
+---
+
+## Cost Annualization
+
+**Important:** All investment cost values must be properly weighted to match the optimization model's time horizon.
+
+For long-term investments, costs should be annualized:
+
+$$\label{eq:annualization}
+\text{cost}_\text{annual} = \frac{\text{cost}_\text{capital} \cdot r}{1 - (1 + r)^{-n}}
+$$
+
+With:
+- $\text{cost}_\text{capital}$ being the upfront investment cost
+- $r$ being the discount rate
+- $n$ being the equipment lifetime in years
+
+**Example:** €1,000,000 equipment with 20-year life and 5% discount rate
+$$
+\text{cost}_\text{annual} = \frac{1{,}000{,}000 \cdot 0.05}{1 - (1.05)^{-20}} \approx €80{,}243/\text{year}
+$$
+
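+A small helper illustrating the formula (plain Python, not part of the flixopt API):
+
+```python
+def annualize(capital_cost: float, rate: float, lifetime_years: int) -> float:
+    """Annuity: spread an upfront cost evenly over the equipment lifetime."""
+    return capital_cost * rate / (1 - (1 + rate) ** -lifetime_years)
+
+annualize(1_000_000, 0.05, 20)  # ≈ 80,243 €/year
+```
+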
+---
+
+## Implementation
+
+**Python Class:** [`InvestParameters`][flixopt.interface.InvestParameters]
+
+**Key Parameters:**
+- `fixed_size`: For binary investments (mutually exclusive with continuous sizing)
+- `minimum_size`, `maximum_size`: For continuous sizing
+- `mandatory`: Whether investment is required (default: `False`)
+- `effects_of_investment`: Fixed effects incurred when investing (replaces deprecated `fix_effects`)
+- `effects_of_investment_per_size`: Per-unit effects proportional to size (replaces deprecated `specific_effects`)
+- `piecewise_effects_of_investment`: Non-linear effect modeling (replaces deprecated `piecewise_effects`)
+- `effects_of_retirement`: Effects for not investing (replaces deprecated `divest_effects`)
+
+See the [`InvestParameters`][flixopt.interface.InvestParameters] API documentation for complete parameter list and usage examples.
+
+**Used in:**
+- [`Flow`][flixopt.elements.Flow] - Flexible capacity decisions
+- [`Storage`][flixopt.components.Storage] - Storage sizing optimization
+- [`LinearConverter`][flixopt.components.LinearConverter] - Converter capacity planning
+- All components supporting investment decisions
+
+---
+
+## Examples
+
+### Binary Investment (Solar Panels)
+```python
+from flixopt.interface import InvestParameters
+
+solar_investment = InvestParameters(
+ fixed_size=100, # 100 kW system
+ mandatory=False, # Optional investment (default)
+ effects_of_investment={'cost': 25000}, # Installation costs
+ effects_of_investment_per_size={'cost': 1200}, # €1200/kW
+)
+```
+
+### Continuous Sizing (Battery)
+```python
+battery_investment = InvestParameters(
+ minimum_size=10, # kWh
+ maximum_size=1000,
+ mandatory=False, # Optional investment (default)
+ effects_of_investment={'cost': 5000}, # Grid connection
+ effects_of_investment_per_size={'cost': 600}, # €600/kWh
+)
+```
+
+### With Retirement Costs (Replacement)
+```python
+boiler_replacement = InvestParameters(
+ minimum_size=50, # kW
+ maximum_size=200,
+ mandatory=False, # Optional investment (default)
+ effects_of_investment={'cost': 15000},
+ effects_of_investment_per_size={'cost': 400},
+ effects_of_retirement={'cost': 8000}, # Demolition if not replaced
+)
+```
+
+### Economies of Scale (Piecewise)
+```python
+from flixopt.interface import InvestParameters, Piece, Piecewise, PiecewiseEffects
+
+battery_investment = InvestParameters(
+ minimum_size=10,
+ maximum_size=1000,
+ piecewise_effects_of_investment=PiecewiseEffects(
+ piecewise_origin=Piecewise([
+ Piece(0, 100), # Small
+ Piece(100, 500), # Medium
+ Piece(500, 1000), # Large
+ ]),
+ piecewise_shares={
+ 'cost': Piecewise([
+ Piece(800, 750), # €800-750/kWh
+ Piece(750, 600), # €750-600/kWh
+ Piece(600, 500), # €600-500/kWh (bulk discount)
+ ])
+ },
+ ),
+)
+```
diff --git a/docs/user-guide/mathematical-notation/features/OnOffParameters.md b/docs/user-guide/mathematical-notation/features/OnOffParameters.md
new file mode 100644
index 000000000..4ec6a9726
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/features/OnOffParameters.md
@@ -0,0 +1,307 @@
+# OnOffParameters
+
+[`OnOffParameters`][flixopt.interface.OnOffParameters] model equipment that operates in discrete on/off states rather than continuous operation. This captures realistic operational constraints including startup costs, minimum run times, cycling limitations, and maintenance scheduling.
+
+## Binary State Variable
+
+Equipment operation is modeled using a binary state variable:
+
+$$\label{eq:onoff_state}
+s(t) \in \{0, 1\} \quad \forall t
+$$
+
+With:
+- $s(t) = 1$: equipment is operating (on state)
+- $s(t) = 0$: equipment is shut down (off state)
+
+This state variable controls the equipment's operational constraints and modifies flow bounds using the **bounds with state** pattern from [Bounds and States](../modeling-patterns/bounds-and-states.md#bounds-with-state).
+
+---
+
+## State Transitions and Switching
+
+State transitions are tracked using switch variables (see [State Transitions](../modeling-patterns/state-transitions.md#binary-state-transitions)):
+
+$$\label{eq:onoff_transitions}
+s^\text{on}(t) - s^\text{off}(t) = s(t) - s(t-1) \quad \forall t > 0
+$$
+
+$$\label{eq:onoff_switch_exclusivity}
+s^\text{on}(t) + s^\text{off}(t) \leq 1 \quad \forall t
+$$
+
+With:
+- $s^\text{on}(t) \in \{0, 1\}$: equals 1 when switching from off to on (startup)
+- $s^\text{off}(t) \in \{0, 1\}$: equals 1 when switching from on to off (shutdown)
+
+**Behavior:**
+- Off → On: $s^\text{on}(t) = 1, s^\text{off}(t) = 0$
+- On → Off: $s^\text{on}(t) = 0, s^\text{off}(t) = 1$
+- No change: $s^\text{on}(t) = 0, s^\text{off}(t) = 0$
+
+---
+
+## Effects and Costs
+
+### Switching Effects
+
+Effects incurred when equipment starts up:
+
+$$\label{eq:onoff_switch_effects}
+E_{e,\text{switch}} = \sum_{t} s^\text{on}(t) \cdot \text{effect}_{e,\text{switch}}
+$$
+
+With:
+- $\text{effect}_{e,\text{switch}}$ being the effect value per startup event
+
+**Examples:**
+- Startup fuel consumption
+- Wear and tear costs
+- Labor costs for startup procedures
+- Inrush power demands
+
+---
+
+### Running Effects
+
+Effects incurred while equipment is operating:
+
+$$\label{eq:onoff_running_effects}
+E_{e,\text{run}} = \sum_{t} s(t) \cdot \Delta t \cdot \text{effect}_{e,\text{run}}
+$$
+
+With:
+- $\text{effect}_{e,\text{run}}$ being the effect rate per operating hour
+- $\Delta t$ being the time step duration
+
+**Examples:**
+- Fixed operating and maintenance costs
+- Auxiliary power consumption
+- Consumable materials
+- Emissions while running
+
+---
+
+## Operating Hour Constraints
+
+### Total Operating Hours
+
+Bounds on total operating time across the planning horizon:
+
+$$\label{eq:onoff_total_hours}
+h_\text{min} \leq \sum_{t} s(t) \cdot \Delta t \leq h_\text{max}
+$$
+
+With:
+- $h_\text{min}$ being the minimum total operating hours
+- $h_\text{max}$ being the maximum total operating hours
+
+**Use cases:**
+- Minimum runtime requirements (contracts, maintenance)
+- Maximum runtime limits (fuel availability, permits, equipment life)
+
+---
+
+### Consecutive Operating Hours
+
+**Minimum Consecutive On-Time:**
+
+Enforces minimum runtime once started using duration tracking (see [Duration Tracking](../modeling-patterns/duration-tracking.md#minimum-duration-constraints)):
+
+$$\label{eq:onoff_min_on_duration}
+d^\text{on}(t-1) \geq (s(t-1) - s(t)) \cdot h^\text{on}_\text{min} \quad \forall t > 0
+$$
+
+With:
+- $d^\text{on}(t)$ being the consecutive on-time duration at time $t$
+- $h^\text{on}_\text{min}$ being the minimum required on-time
+
+**Behavior:**
+- When shutting down at time $t$: enforces equipment was on for at least $h^\text{on}_\text{min}$ prior to the switch
+- Prevents short cycling and frequent startups
+
+**Maximum Consecutive On-Time:**
+
+Limits continuous operation before requiring shutdown:
+
+$$\label{eq:onoff_max_on_duration}
+d^\text{on}(t) \leq h^\text{on}_\text{max} \quad \forall t
+$$
+
+**Use cases:**
+- Mandatory maintenance intervals
+- Process batch time limits
+- Thermal cycling requirements
+
+---
+
+### Consecutive Shutdown Hours
+
+**Minimum Consecutive Off-Time:**
+
+Enforces minimum shutdown duration before restarting:
+
+$$\label{eq:onoff_min_off_duration}
+d^\text{off}(t-1) \geq (s(t) - s(t-1)) \cdot h^\text{off}_\text{min} \quad \forall t > 0
+$$
+
+With:
+- $d^\text{off}(t)$ being the consecutive off-time duration at time $t$
+- $h^\text{off}_\text{min}$ being the minimum required off-time
+
+**Use cases:**
+- Cooling periods
+- Maintenance requirements
+- Process stabilization
+
+**Maximum Consecutive Off-Time:**
+
+Limits shutdown duration before mandatory restart:
+
+$$\label{eq:onoff_max_off_duration}
+d^\text{off}(t) \leq h^\text{off}_\text{max} \quad \forall t
+$$
+
+**Use cases:**
+- Equipment preservation requirements
+- Process stability needs
+- Contractual minimum activity levels
+
+---
+
+## Cycling Limits
+
+Maximum number of startups across the planning horizon:
+
+$$\label{eq:onoff_max_switches}
+\sum_{t} s^\text{on}(t) \leq n_\text{max}
+$$
+
+With:
+- $n_\text{max}$ being the maximum allowed number of startups
+
+**Use cases:**
+- Preventing excessive equipment wear
+- Grid stability requirements
+- Operational complexity limits
+- Maintenance budget constraints
+
+---
+
+## Integration with Flow Bounds
+
+OnOffParameters modify flow rate bounds by coupling them to the on/off state.
+
+**Without OnOffParameters** (continuous operation):
+$$
+P \cdot \text{rel}_\text{lower} \leq p(t) \leq P \cdot \text{rel}_\text{upper}
+$$
+
+**With OnOffParameters** (binary operation):
+$$
+s(t) \cdot P \cdot \max(\varepsilon, \text{rel}_\text{lower}) \leq p(t) \leq s(t) \cdot P \cdot \text{rel}_\text{upper}
+$$
+
+Using the **bounds with state** pattern from [Bounds and States](../modeling-patterns/bounds-and-states.md#bounds-with-state).
+
+**Behavior:**
+- When $s(t) = 0$: flow is forced to zero
+- When $s(t) = 1$: flow follows normal bounds
+
+---
+
+## Complete Formulation Summary
+
+For equipment with OnOffParameters, the complete constraint system includes:
+
+1. **State variable:** $s(t) \in \{0, 1\}$
+2. **Switch tracking:** $s^\text{on}(t) - s^\text{off}(t) = s(t) - s(t-1)$
+3. **Switch exclusivity:** $s^\text{on}(t) + s^\text{off}(t) \leq 1$
+4. **Duration tracking:**
+ - On-duration: $d^\text{on}(t)$ following duration tracking pattern
+ - Off-duration: $d^\text{off}(t)$ following duration tracking pattern
+5. **Minimum on-time:** $d^\text{on}(t-1) \geq (s(t-1) - s(t)) \cdot h^\text{on}_\text{min}$
+6. **Maximum on-time:** $d^\text{on}(t) \leq h^\text{on}_\text{max}$
+7. **Minimum off-time:** $d^\text{off}(t-1) \geq (s(t) - s(t-1)) \cdot h^\text{off}_\text{min}$
+8. **Maximum off-time:** $d^\text{off}(t) \leq h^\text{off}_\text{max}$
+9. **Total hours:** $h_\text{min} \leq \sum_t s(t) \cdot \Delta t \leq h_\text{max}$
+10. **Cycling limit:** $\sum_t s^\text{on}(t) \leq n_\text{max}$
+11. **Flow bounds:** $s(t) \cdot P \cdot \text{rel}_\text{lower} \leq p(t) \leq s(t) \cdot P \cdot \text{rel}_\text{upper}$
+
+---
+
+## Implementation
+
+**Python Class:** [`OnOffParameters`][flixopt.interface.OnOffParameters]
+
+**Key Parameters:**
+- `effects_per_switch_on`: Costs per startup event
+- `effects_per_running_hour`: Costs per hour of operation
+- `on_hours_total_min`, `on_hours_total_max`: Total runtime bounds
+- `consecutive_on_hours_min`, `consecutive_on_hours_max`: Consecutive runtime bounds
+- `consecutive_off_hours_min`, `consecutive_off_hours_max`: Consecutive shutdown bounds
+- `switch_on_total_max`: Maximum number of startups
+- `force_switch_on`: Create switch variables even without limits (for tracking)
+
+See the [`OnOffParameters`][flixopt.interface.OnOffParameters] API documentation for complete parameter list and usage examples.
+
+**Mathematical Patterns Used:**
+- [State Transitions](../modeling-patterns/state-transitions.md#binary-state-transitions) - Switch tracking
+- [Duration Tracking](../modeling-patterns/duration-tracking.md) - Consecutive time constraints
+- [Bounds with State](../modeling-patterns/bounds-and-states.md#bounds-with-state) - Flow control
+
+**Used in:**
+- [`Flow`][flixopt.elements.Flow] - On/off operation for flows
+- All components supporting discrete operational states
+
+---
+
+## Examples
+
+### Power Plant with Startup Costs
+```python
+from flixopt.interface import OnOffParameters
+
+power_plant = OnOffParameters(
+ effects_per_switch_on={'startup_cost': 25000}, # €25k per startup
+ effects_per_running_hour={'fixed_om': 125}, # €125/hour while running
+ consecutive_on_hours_min=8, # Minimum 8-hour run
+ consecutive_off_hours_min=4, # 4-hour cooling period
+ on_hours_total_max=6000, # Annual limit
+)
+```
+
+### Batch Process with Cycling Limits
+```python
+batch_reactor = OnOffParameters(
+ effects_per_switch_on={'setup_cost': 1500},
+ consecutive_on_hours_min=12, # 12-hour minimum batch
+ consecutive_on_hours_max=24, # 24-hour maximum batch
+ consecutive_off_hours_min=6, # Cleaning time
+ switch_on_total_max=200, # Max 200 batches
+)
+```
+
+### HVAC with Cycle Prevention
+```python
+hvac = OnOffParameters(
+ effects_per_switch_on={'compressor_wear': 0.5},
+ consecutive_on_hours_min=1, # Prevent short cycling
+ consecutive_off_hours_min=0.5, # 30-min minimum off
+ switch_on_total_max=2000, # Limit compressor starts
+)
+```
+
+### Backup Generator with Testing Requirements
+```python
+backup_gen = OnOffParameters(
+    effects_per_switch_on={'fuel_priming': 50},  # 50 L diesel per start
+ consecutive_on_hours_min=0.5, # 30-min test duration
+ consecutive_off_hours_max=720, # Test every 30 days
+ on_hours_total_min=26, # Weekly testing requirement
+)
+```
+
+---
+
+## Notes
+
+**Time Series Boundary:** The `consecutive_on_hours_min/max` and `consecutive_off_hours_min/max` constraints are not enforced at the end of the planning horizon. This allows the optimization to end with an ongoing campaign that is shorter or longer than specified, since it extends beyond the modeled period.
diff --git a/docs/user-guide/Mathematical Notation/Piecewise.md b/docs/user-guide/mathematical-notation/features/Piecewise.md
similarity index 100%
rename from docs/user-guide/Mathematical Notation/Piecewise.md
rename to docs/user-guide/mathematical-notation/features/Piecewise.md
diff --git a/docs/user-guide/mathematical-notation/index.md b/docs/user-guide/mathematical-notation/index.md
new file mode 100644
index 000000000..ae89f3b67
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/index.md
@@ -0,0 +1,123 @@
+
+# Mathematical Notation
+
+This section provides the **mathematical formulations** underlying FlixOpt's optimization models. It is intended as **reference documentation** for users who want to understand the mathematical details behind the high-level FlixOpt API described in the [FlixOpt Concepts](../index.md) guide.
+
+**For typical usage**, refer to the [FlixOpt Concepts](../index.md) guide, [Examples](../../examples/), and [API Reference](../../api-reference/) - you don't need to understand these mathematical formulations to use FlixOpt effectively.
+
+---
+
+## Naming Conventions
+
+FlixOpt uses the following naming conventions:
+
+- All optimization variables are denoted by italic letters (e.g., $x$, $y$, $z$)
+- All parameters and constants are denoted by upright (non-italic) small letters (e.g., $\text{a}$, $\text{b}$, $\text{c}$)
+- All sets are denoted by calligraphic capital letters (e.g., $\mathcal{F}$, $\mathcal{E}$)
+- All elements of a set are denoted by the corresponding small letters (e.g., $f$, $e$)
+- The letter $i$ is used to denote an index (e.g., $i=1,\dots,\text n$)
+- All time steps are denoted by the letter $\text{t}$ (e.g., $\text{t}_0$, $\text{t}_1$, $\text{t}_i$)
+
+## Dimensions and Time Steps
+
+FlixOpt supports multi-dimensional optimization with up to three dimensions: **time** (mandatory), **period** (optional), and **scenario** (optional).
+
+**All mathematical formulations in this documentation are independent of whether periods or scenarios are present.** The equations shown are written with time index $\text{t}_i$ only, but automatically expand to additional dimensions when periods/scenarios are added.
+
+For complete details on dimensions, their relationships, and influence on formulations, see **[Dimensions](dimensions.md)**.
+
+### Time Steps
+
+Time steps are defined as a sequence of discrete time steps $\text{t}_i \in \mathcal{T} \quad \text{for} \quad i \in \{1, 2, \dots, \text{n}\}$, where each time step marks the beginning of the interval it represents.
+From this sequence, the corresponding time intervals $\Delta \text{t}_i \in \Delta \mathcal{T}$ are derived as
+
+$$\Delta \text{t}_i = \text{t}_{i+1} - \text{t}_i \quad \text{for} \quad i \in \{1, 2, \dots, \text{n}-1\}$$
+
+The final time interval $\Delta \text{t}_\text{n}$ defaults to the preceding interval, $\Delta \text{t}_\text{n} = \Delta \text{t}_{\text{n}-1}$, but can be set explicitly.
+Non-equidistant time steps are also supported.
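+
+As a sketch of this convention in pandas (not flixopt-specific):
+
+```python
+import pandas as pd
+
+timesteps = pd.date_range('2025-01-01', periods=4, freq='h')
+dt = timesteps.to_series().diff().shift(-1)  # Δt_i = t_{i+1} - t_i
+dt.iloc[-1] = dt.iloc[-2]  # final interval defaults to the previous one
+```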
+
+---
+
+## Documentation Structure
+
+This reference is organized to match the FlixOpt API structure:
+
+### Elements
+Mathematical formulations for core FlixOpt elements (corresponding to [`flixopt.elements`][flixopt.elements]):
+
+- [Flow](elements/Flow.md) - Flow rate constraints and bounds
+- [Bus](elements/Bus.md) - Nodal balance equations
+- [Storage](elements/Storage.md) - Storage balance and charge state evolution
+- [LinearConverter](elements/LinearConverter.md) - Linear conversion relationships
+
+**User API:** When you create a `Flow`, `Bus`, `Storage`, or `LinearConverter` in your FlixOpt model, these mathematical formulations are automatically applied.
+
+### Features
+Mathematical formulations for optional features (corresponding to parameters in FlixOpt classes):
+
+- [InvestParameters](features/InvestParameters.md) - Investment decision modeling
+- [OnOffParameters](features/OnOffParameters.md) - Binary on/off operation
+- [Piecewise](features/Piecewise.md) - Piecewise linear approximations
+
+**User API:** When you pass `invest_parameters` or `on_off_parameters` to a `Flow` or component, these formulations are applied.
+
+### System-Level
+- [Effects, Penalty & Objective](effects-penalty-objective.md) - Cost allocation and objective function
+
+**User API:** When you create [`Effect`][flixopt.effects.Effect] objects and set `effects_per_flow_hour`, these formulations govern how costs are calculated.
+
+### Modeling Patterns (Advanced)
+**Internal implementation details** - These low-level patterns are used internally by Elements and Features. They are documented here for:
+
+- Developers extending FlixOpt
+- Advanced users debugging models or understanding solver behavior
+- Researchers comparing mathematical formulations
+
+**Normal users do not need to read this section** - the patterns are automatically applied when you use Elements and Features:
+
+- [Bounds and States](modeling-patterns/bounds-and-states.md) - Variable bounding patterns
+- [Duration Tracking](modeling-patterns/duration-tracking.md) - Consecutive time period tracking
+- [State Transitions](modeling-patterns/state-transitions.md) - State change modeling
+
+---
+
+## Quick Reference
+
+### Components Cross-Reference
+
+| Concept | Documentation | Python Class |
+|---------|---------------|--------------|
+| **Flow rate bounds** | [Flow](elements/Flow.md) | [`Flow`][flixopt.elements.Flow] |
+| **Bus balance** | [Bus](elements/Bus.md) | [`Bus`][flixopt.elements.Bus] |
+| **Storage balance** | [Storage](elements/Storage.md) | [`Storage`][flixopt.components.Storage] |
+| **Linear conversion** | [LinearConverter](elements/LinearConverter.md) | [`LinearConverter`][flixopt.components.LinearConverter] |
+
+### Features Cross-Reference
+
+| Concept | Documentation | Python Class |
+|---------|---------------|--------------|
+| **Binary investment** | [InvestParameters](features/InvestParameters.md) | [`InvestParameters`][flixopt.interface.InvestParameters] |
+| **On/off operation** | [OnOffParameters](features/OnOffParameters.md) | [`OnOffParameters`][flixopt.interface.OnOffParameters] |
+| **Piecewise segments** | [Piecewise](features/Piecewise.md) | [`Piecewise`][flixopt.interface.Piecewise] |
+
+### Modeling Patterns Cross-Reference
+
+| Pattern | Documentation | Implementation |
+|---------|---------------|----------------|
+| **Basic bounds** | [bounds-and-states](modeling-patterns/bounds-and-states.md#basic-bounds) | [`BoundingPatterns.basic_bounds()`][flixopt.modeling.BoundingPatterns.basic_bounds] |
+| **Bounds with state** | [bounds-and-states](modeling-patterns/bounds-and-states.md#bounds-with-state) | [`BoundingPatterns.bounds_with_state()`][flixopt.modeling.BoundingPatterns.bounds_with_state] |
+| **Scaled bounds** | [bounds-and-states](modeling-patterns/bounds-and-states.md#scaled-bounds) | [`BoundingPatterns.scaled_bounds()`][flixopt.modeling.BoundingPatterns.scaled_bounds] |
+| **Duration tracking** | [duration-tracking](modeling-patterns/duration-tracking.md) | [`ModelingPrimitives.consecutive_duration_tracking()`][flixopt.modeling.ModelingPrimitives.consecutive_duration_tracking] |
+| **State transitions** | [state-transitions](modeling-patterns/state-transitions.md) | [`BoundingPatterns.state_transition_bounds()`][flixopt.modeling.BoundingPatterns.state_transition_bounds] |
+
+### Python Class Lookup
+
+| Class | Documentation | API Reference |
+|-------|---------------|---------------|
+| `Flow` | [Flow](elements/Flow.md) | [`Flow`][flixopt.elements.Flow] |
+| `Bus` | [Bus](elements/Bus.md) | [`Bus`][flixopt.elements.Bus] |
+| `Storage` | [Storage](elements/Storage.md) | [`Storage`][flixopt.components.Storage] |
+| `LinearConverter` | [LinearConverter](elements/LinearConverter.md) | [`LinearConverter`][flixopt.components.LinearConverter] |
+| `InvestParameters` | [InvestParameters](features/InvestParameters.md) | [`InvestParameters`][flixopt.interface.InvestParameters] |
+| `OnOffParameters` | [OnOffParameters](features/OnOffParameters.md) | [`OnOffParameters`][flixopt.interface.OnOffParameters] |
+| `Piecewise` | [Piecewise](features/Piecewise.md) | [`Piecewise`][flixopt.interface.Piecewise] |
diff --git a/docs/user-guide/mathematical-notation/modeling-patterns/bounds-and-states.md b/docs/user-guide/mathematical-notation/modeling-patterns/bounds-and-states.md
new file mode 100644
index 000000000..d5821948f
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/modeling-patterns/bounds-and-states.md
@@ -0,0 +1,165 @@
+# Bounds and States
+
+This document describes the mathematical formulations for variable bounding patterns used throughout FlixOpt. These patterns define how optimization variables are constrained, both with and without state control.
+
+## Basic Bounds
+
+The simplest bounding pattern constrains a variable between lower and upper bounds.
+
+$$\label{eq:basic_bounds}
+\text{lower} \leq v \leq \text{upper}
+$$
+
+With:
+- $v$ being the optimization variable
+- $\text{lower}$ being the lower bound (constant or time-dependent)
+- $\text{upper}$ being the upper bound (constant or time-dependent)
+
+**Implementation:** [`BoundingPatterns.basic_bounds()`][flixopt.modeling.BoundingPatterns.basic_bounds]
+
+**Used in:**
+- Storage charge state bounds (see [Storage](../elements/Storage.md))
+- Flow rate absolute bounds
+
+---
+
+## Bounds with State
+
+When a variable should only be non-zero if a binary state variable is active (e.g., on/off operation, investment decisions), the bounds are controlled by the state:
+
+$$\label{eq:bounds_with_state}
+s \cdot \max(\varepsilon, \text{lower}) \leq v \leq s \cdot \text{upper}
+$$
+
+With:
+- $v$ being the optimization variable
+- $s \in \{0, 1\}$ being the binary state variable
+- $\text{lower}$ being the lower bound when active
+- $\text{upper}$ being the upper bound when active
+- $\varepsilon$ being a small positive number to ensure numerical stability
+
+**Behavior:**
+- When $s = 0$: variable is forced to zero ($0 \leq v \leq 0$)
+- When $s = 1$: variable can take values in $[\text{lower}, \text{upper}]$
+
+**Implementation:** [`BoundingPatterns.bounds_with_state()`][flixopt.modeling.BoundingPatterns.bounds_with_state]
+
+**Used in:**
+- Flow rates with on/off operation (see [OnOffParameters](../features/OnOffParameters.md))
+- Investment size decisions (see [InvestParameters](../features/InvestParameters.md))
+
+---
+
+## Scaled Bounds
+
+When a variable's bounds depend on another variable (e.g., flow rate scaled by component size), scaled bounds are used:
+
+$$\label{eq:scaled_bounds}
+v_\text{scale} \cdot \text{rel}_\text{lower} \leq v \leq v_\text{scale} \cdot \text{rel}_\text{upper}
+$$
+
+With:
+- $v$ being the optimization variable (e.g., flow rate)
+- $v_\text{scale}$ being the scaling variable (e.g., component size)
+- $\text{rel}_\text{lower}$ being the relative lower bound factor (typically 0)
+- $\text{rel}_\text{upper}$ being the relative upper bound factor (typically 1)
+
+**Example:** Flow rate bounds
+- If $v_\text{scale} = P$ (flow size) and $\text{rel}_\text{upper} = 1$
+- Then: $0 \leq p(t_i) \leq P$ (see [Flow](../elements/Flow.md))
+
+**Implementation:** [`BoundingPatterns.scaled_bounds()`][flixopt.modeling.BoundingPatterns.scaled_bounds]
+
+**Used in:**
+- Flow rate constraints (see [Flow](../elements/Flow.md) equation 1)
+- Storage charge state constraints (see [Storage](../elements/Storage.md) equation 1)
+
+---
+
+## Scaled Bounds with State
+
+Combining scaled bounds with binary state control requires a Big-M formulation to handle both the scaling and the on/off behavior:
+
+$$\label{eq:scaled_bounds_with_state_1}
+(s - 1) \cdot M_\text{misc} + v_\text{scale} \cdot \text{rel}_\text{lower} \leq v \leq v_\text{scale} \cdot \text{rel}_\text{upper}
+$$
+
+$$\label{eq:scaled_bounds_with_state_2}
+s \cdot M_\text{lower} \leq v \leq s \cdot M_\text{upper}
+$$
+
+With:
+- $v$ being the optimization variable
+- $v_\text{scale}$ being the scaling variable
+- $s \in \{0, 1\}$ being the binary state variable
+- $\text{rel}_\text{lower}$ being the relative lower bound factor
+- $\text{rel}_\text{upper}$ being the relative upper bound factor
+- $M_\text{misc} = v_\text{scale,max} \cdot \text{rel}_\text{lower}$
+- $M_\text{upper} = v_\text{scale,max} \cdot \text{rel}_\text{upper}$
+- $M_\text{lower} = \max(\varepsilon, v_\text{scale,min} \cdot \text{rel}_\text{lower})$
+
+Where $v_\text{scale,max}$ and $v_\text{scale,min}$ are the maximum and minimum possible values of the scaling variable.
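+
+**Illustrative values:** with $v_\text{scale,max} = 100$, $v_\text{scale,min} = 10$, $\text{rel}_\text{lower} = 0.2$ and $\text{rel}_\text{upper} = 1$, the constants become $M_\text{misc} = 20$, $M_\text{upper} = 100$ and $M_\text{lower} = 2$. For $s = 0$, $\eqref{eq:scaled_bounds_with_state_2}$ forces $v = 0$, while the $-M_\text{misc}$ slack keeps $\eqref{eq:scaled_bounds_with_state_1}$ feasible for every admissible $v_\text{scale}$.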
+
+**Behavior:**
+- When $s = 0$: variable is forced to zero
+- When $s = 1$: variable follows scaled bounds $v_\text{scale} \cdot \text{rel}_\text{lower} \leq v \leq v_\text{scale} \cdot \text{rel}_\text{upper}$
+
+**Implementation:** [`BoundingPatterns.scaled_bounds_with_state()`][flixopt.modeling.BoundingPatterns.scaled_bounds_with_state]
+
+**Used in:**
+- Flow rates with on/off operation and investment sizing
+- Components combining [OnOffParameters](../features/OnOffParameters.md) and [InvestParameters](../features/InvestParameters.md)
+
+---
+
+## Expression Tracking
+
+Sometimes it's necessary to create an auxiliary variable that equals an expression:
+
+$$\label{eq:expression_tracking}
+v_\text{tracker} = \text{expression}
+$$
+
+With optional bounds:
+
+$$\label{eq:expression_tracking_bounds}
+\text{lower} \leq v_\text{tracker} \leq \text{upper}
+$$
+
+With:
+- $v_\text{tracker}$ being the auxiliary tracking variable
+- $\text{expression}$ being a linear expression of other variables
+- $\text{lower}, \text{upper}$ being optional bounds on the tracker
+
+**Use cases:**
+- Creating named variables for complex expressions
+- Bounding intermediate results
+- Simplifying constraint formulations
+
+**Implementation:** [`ModelingPrimitives.expression_tracking_variable()`][flixopt.modeling.ModelingPrimitives.expression_tracking_variable]
+
+---
+
+## Mutual Exclusivity
+
+When multiple binary variables should not be active simultaneously (at most one can be 1):
+
+$$\label{eq:mutual_exclusivity}
+\sum_{i} s_i(t) \leq \text{tolerance} \quad \forall t
+$$
+
+With:
+- $s_i(t) \in \{0, 1\}$ being binary state variables
+- $\text{tolerance}$ being the maximum number of simultaneously active states (typically 1)
+- $t$ being the time index
+
+**Use cases:**
+- Ensuring only one operating mode is active
+- Mutual exclusion of operation and maintenance states
+- Enforcing single-choice decisions
+
+**Implementation:** [`ModelingPrimitives.mutual_exclusivity_constraint()`][flixopt.modeling.ModelingPrimitives.mutual_exclusivity_constraint]
+
+**Used in:**
+- Operating mode selection
+- Piecewise linear function segments (see [Piecewise](../features/Piecewise.md))
diff --git a/docs/user-guide/mathematical-notation/modeling-patterns/duration-tracking.md b/docs/user-guide/mathematical-notation/modeling-patterns/duration-tracking.md
new file mode 100644
index 000000000..5d430d28c
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/modeling-patterns/duration-tracking.md
@@ -0,0 +1,159 @@
+# Duration Tracking
+
+Duration tracking allows monitoring how long a binary state has been consecutively active. This is essential for modeling minimum run times, ramp-up periods, and similar time-dependent constraints.
+
+## Consecutive Duration Tracking
+
+For a binary state variable $s(t) \in \{0, 1\}$, the consecutive duration $d(t)$ tracks how long the state has been continuously active.
+
+### Duration Upper Bound
+
+The duration cannot exceed zero when the state is inactive:
+
+$$\label{eq:duration_upper}
+d(t) \leq s(t) \cdot M \quad \forall t
+$$
+
+With:
+- $d(t)$ being the duration variable (continuous, non-negative)
+- $s(t) \in \{0, 1\}$ being the binary state variable
+- $M$ being a sufficiently large constant (big-M)
+
+**Behavior:**
+- When $s(t) = 0$: forces $d(t) \leq 0$, thus $d(t) = 0$
+- When $s(t) = 1$: allows $d(t)$ to be positive
+
+---
+
+### Duration Accumulation
+
+While the state is active, the duration increases by the time step size:
+
+$$\label{eq:duration_accumulation_upper}
+d(t+1) \leq d(t) + \Delta d(t) \quad \forall t
+$$
+
+$$\label{eq:duration_accumulation_lower}
+d(t+1) \geq d(t) + \Delta d(t) + (s(t+1) - 1) \cdot M \quad \forall t
+$$
+
+With:
+- $\Delta d(t)$ being the duration increment for time step $t$ (typically $\Delta t_i$ from the time series)
+- $M$ being a sufficiently large constant
+
+**Behavior:**
+- When $s(t+1) = 1$: both inequalities enforce $d(t+1) = d(t) + \Delta d(t)$
+- When $s(t+1) = 0$: the lower bound becomes non-binding, and $d(t+1) = 0$ follows from equation $\eqref{eq:duration_upper}$
+
+---
+
+### Initial Duration
+
+The duration at the first time step depends on both the state and any previous duration:
+
+$$\label{eq:duration_initial}
+d(0) = (\Delta d(0) + d_\text{prev}) \cdot s(0)
+$$
+
+With:
+- $d_\text{prev}$ being the duration from before the optimization period
+- $\Delta d(0)$ being the duration increment for the first time step
+
+**Behavior:**
+- When $s(0) = 1$: duration continues from previous period
+- When $s(0) = 0$: duration resets to zero
+
+---
+
+### Complete Formulation
+
+Combining all constraints:
+
+$$
+\begin{align}
+d(t) &\leq s(t) \cdot M && \forall t \label{eq:duration_complete_1} \\
+d(t+1) &\leq d(t) + \Delta d(t) && \forall t \label{eq:duration_complete_2} \\
+d(t+1) &\geq d(t) + \Delta d(t) + (s(t+1) - 1) \cdot M && \forall t \label{eq:duration_complete_3} \\
+d(0) &= (\Delta d(0) + d_\text{prev}) \cdot s(0) && \label{eq:duration_complete_4}
+\end{align}
+$$
+
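+**Example:**
+
+The recurrence can be verified by hand. A pure-Python walkthrough with a made-up schedule and step sizes (no solver involved):
+
+```python
+s = [1, 1, 0, 1, 1, 1]  # binary state schedule
+dt = [1.0] * 6          # duration increment per step
+d_prev = 2.0            # duration carried in from before the horizon
+
+d = []
+for t, state in enumerate(s):
+    if state == 0:
+        d.append(0.0)  # d(t) <= s(t) * M forces zero
+    elif t == 0:
+        d.append(dt[0] + d_prev)  # d(0) = (dt(0) + d_prev) * s(0)
+    else:
+        d.append(d[t - 1] + dt[t - 1])  # accumulation constraints bind
+
+print(d)  # [3.0, 4.0, 0.0, 1.0, 2.0, 3.0]
+```
+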
+---
+
+## Minimum Duration Constraints
+
+To enforce a minimum consecutive duration (e.g., minimum run time), an additional constraint links the duration to state changes:
+
+$$\label{eq:minimum_duration}
+d(t-1) \geq (s(t-1) - s(t)) \cdot d_\text{min}(t-1) \quad \forall t > 0
+$$
+
+With:
+- $d_\text{min}(t)$ being the required minimum duration at time $t$
+
+**Behavior:**
+- When shutting down ($s(t-1) = 1, s(t) = 0$): enforces $d(t-1) \geq d_\text{min}(t-1)$
+- This ensures the state was active for at least $d_\text{min}$ before turning off
+- When state is constant or turning on: constraint is non-binding
+
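+**Example:**
+
+A quick pure-Python check of this behavior at a shutdown event (made-up values, uniform step size of 1 hour):
+
+```python
+s = [1, 1, 0]        # shutdown happens at t = 2
+d = [1.0, 2.0, 0.0]  # durations consistent with the tracking constraints
+d_min = 2.0
+
+for t in range(1, len(s)):
+    # binding only at shutdown: d(t-1) >= d_min(t-1)
+    assert d[t - 1] >= (s[t - 1] - s[t]) * d_min
+```
+
+Had the state been active for only one hour before the shutdown ($d(1) = 1$), the constraint would be violated, so a solver must keep the state active until $d_\text{min}$ is reached.
+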
+---
+
+## Implementation
+
+**Function:** [`ModelingPrimitives.consecutive_duration_tracking()`][flixopt.modeling.ModelingPrimitives.consecutive_duration_tracking]
+
+See the API documentation for complete parameter list and usage details.
+
+---
+
+## Use Cases
+
+### Minimum Run Time
+
+Ensuring equipment runs for a minimum duration once started:
+
+```python
+# State: 1 when running, 0 when off
+# Require at least 2 hours of operation
+duration = modeling.consecutive_duration_tracking(
+ state_variable=on_state,
+ duration_per_step=time_step_hours,
+ minimum_duration=2.0
+)
+```
+
+### Ramp-Up Tracking
+
+Tracking time since startup for gradual ramp-up constraints:
+
+```python
+# Track startup duration
+startup_duration = modeling.consecutive_duration_tracking(
+ state_variable=on_state,
+ duration_per_step=time_step_hours
+)
+# Constrain output based on startup duration
+# (additional constraints would link output to startup_duration)
+```
+
+### Cooldown Requirements
+
+Tracking time in a state before allowing transitions:
+
+```python
+# Track maintenance duration
+maintenance_duration = modeling.consecutive_duration_tracking(
+ state_variable=maintenance_state,
+ duration_per_step=time_step_hours,
+ minimum_duration=scheduled_maintenance_hours
+)
+```
+
+---
+
+## Used In
+
+This pattern is used in:
+- [`OnOffParameters`](../features/OnOffParameters.md) - Minimum on/off times
+- Operating mode constraints with minimum durations
+- Startup/shutdown sequence modeling
diff --git a/docs/user-guide/mathematical-notation/modeling-patterns/index.md b/docs/user-guide/mathematical-notation/modeling-patterns/index.md
new file mode 100644
index 000000000..15ff8dbd2
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/modeling-patterns/index.md
@@ -0,0 +1,54 @@
+# Modeling Patterns
+
+This section documents the fundamental mathematical patterns used throughout FlixOpt for constructing optimization models. These patterns are implemented in `flixopt.modeling` and provide reusable building blocks for creating constraints.
+
+## Overview
+
+The modeling patterns are organized into three categories:
+
+1. **[Bounds and States](bounds-and-states.md)** - Variable bounding with optional state control
+2. **[Duration Tracking](duration-tracking.md)** - Tracking consecutive durations of states
+3. **[State Transitions](state-transitions.md)** - Modeling state changes and transitions
+
+## Pattern Categories
+
+### Bounding Patterns
+
+These patterns define how optimization variables are constrained within bounds:
+
+- **Basic Bounds** - Simple upper and lower bounds on variables
+- **Bounds with State** - Binary-controlled bounds (on/off states)
+- **Scaled Bounds** - Bounds dependent on another variable (e.g., size)
+- **Scaled Bounds with State** - Combination of scaling and binary control
+
+### Tracking Patterns
+
+These patterns track properties over time:
+
+- **Expression Tracking** - Creating auxiliary variables that track expressions
+- **Consecutive Duration Tracking** - Tracking how long a state has been active
+- **Mutual Exclusivity** - Ensuring only one of multiple options is active
+
+### Transition Patterns
+
+These patterns model changes between states:
+
+- **State Transitions** - Tracking switches between binary states (on→off, off→on)
+- **Continuous Transitions** - Linking continuous variable changes to switches
+- **Level Changes with Binaries** - Controlled increases/decreases in levels
+
+## Usage in Components
+
+These patterns are used throughout FlixOpt components:
+
+- [`Flow`][flixopt.elements.Flow] uses **scaled bounds with state** for flow rate constraints
+- [`Storage`][flixopt.components.Storage] uses **basic bounds** for charge state
+- [`OnOffParameters`](../features/OnOffParameters.md) uses **state transitions** for startup/shutdown
+- [`InvestParameters`](../features/InvestParameters.md) uses **bounds with state** for investment decisions
+
+## Implementation
+
+All patterns are implemented in the [`flixopt.modeling`][flixopt.modeling] module:
+
+- [`ModelingPrimitives`][flixopt.modeling.ModelingPrimitives] - Core constraint patterns
+- [`BoundingPatterns`][flixopt.modeling.BoundingPatterns] - Specialized bounding patterns
diff --git a/docs/user-guide/mathematical-notation/modeling-patterns/state-transitions.md b/docs/user-guide/mathematical-notation/modeling-patterns/state-transitions.md
new file mode 100644
index 000000000..dc75a8008
--- /dev/null
+++ b/docs/user-guide/mathematical-notation/modeling-patterns/state-transitions.md
@@ -0,0 +1,227 @@
+# State Transitions
+
+State transition patterns model changes between discrete states and link them to continuous variables. These patterns are essential for modeling startup/shutdown events, switching behavior, and controlled changes in system operation.
+
+## Binary State Transitions
+
+For a binary state variable $s(t) \in \{0, 1\}$, state transitions track when the state switches on or off.
+
+### Switch Variables
+
+Two binary variables track the transitions:
+- $s^\text{on}(t) \in \{0, 1\}$: equals 1 when switching from off to on
+- $s^\text{off}(t) \in \{0, 1\}$: equals 1 when switching from on to off
+
+### Transition Tracking
+
+The state change equals the difference between switch-on and switch-off:
+
+$$\label{eq:state_transition}
+s^\text{on}(t) - s^\text{off}(t) = s(t) - s(t-1) \quad \forall t > 0
+$$
+
+$$\label{eq:state_transition_initial}
+s^\text{on}(0) - s^\text{off}(0) = s(0) - s_\text{prev}
+$$
+
+With:
+- $s(t)$ being the binary state variable
+- $s_\text{prev}$ being the state before the optimization period
+- $s^\text{on}(t), s^\text{off}(t)$ being the switch variables
+
+**Behavior:**
+- Off → On ($s(t-1)=0, s(t)=1$): $s^\text{on}(t)=1, s^\text{off}(t)=0$
+- On → Off ($s(t-1)=1, s(t)=0$): $s^\text{on}(t)=0, s^\text{off}(t)=1$
+- No change: $s^\text{on}(t)=0, s^\text{off}(t)=0$
+
+---
+
+### Mutual Exclusivity of Switches
+
+A state cannot switch on and off simultaneously:
+
+$$\label{eq:switch_exclusivity}
+s^\text{on}(t) + s^\text{off}(t) \leq 1 \quad \forall t
+$$
+
+This ensures:
+- At most one switch event per time step
+- No simultaneous on/off switching
+
+---
+
+### Complete State Transition Formulation
+
+$$
+\begin{align}
+s^\text{on}(t) - s^\text{off}(t) &= s(t) - s(t-1) && \forall t > 0 \label{eq:transition_complete_1} \\
+s^\text{on}(0) - s^\text{off}(0) &= s(0) - s_\text{prev} && \label{eq:transition_complete_2} \\
+s^\text{on}(t) + s^\text{off}(t) &\leq 1 && \forall t \label{eq:transition_complete_3} \\
+s^\text{on}(t), s^\text{off}(t) &\in \{0, 1\} && \forall t \label{eq:transition_complete_4}
+\end{align}
+$$
+
+**Implementation:** [`BoundingPatterns.state_transition_bounds()`][flixopt.modeling.BoundingPatterns.state_transition_bounds]
+
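+**Example:**
+
+Enumerating all four transitions shows that, together with the exclusivity constraint, the formulation admits exactly one switch assignment per case:
+
+```python
+for s_prev in (0, 1):
+    for s_now in (0, 1):
+        delta = s_now - s_prev
+        # s_on + s_off <= 1 rules out the spurious solution s_on = s_off = 1
+        s_on, s_off = max(delta, 0), max(-delta, 0)
+        assert s_on - s_off == delta and s_on + s_off <= 1
+        print(f's: {s_prev} -> {s_now}  switch_on={s_on}  switch_off={s_off}')
+```
+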
+---
+
+## Continuous Transitions
+
+When a continuous variable should only change at switch events, continuous transition bounds link its changes to the binary switch variables.
+
+### Change Bounds with Switches
+
+$$\label{eq:continuous_transition}
+-\Delta v^\text{max} \cdot (s^\text{on}(t) + s^\text{off}(t)) \leq v(t) - v(t-1) \leq \Delta v^\text{max} \cdot (s^\text{on}(t) + s^\text{off}(t)) \quad \forall t > 0
+$$
+
+$$\label{eq:continuous_transition_initial}
+-\Delta v^\text{max} \cdot (s^\text{on}(0) + s^\text{off}(0)) \leq v(0) - v_\text{prev} \leq \Delta v^\text{max} \cdot (s^\text{on}(0) + s^\text{off}(0))
+$$
+
+With:
+- $v(t)$ being the continuous variable
+- $v_\text{prev}$ being the value before the optimization period
+- $\Delta v^\text{max}$ being the maximum allowed change
+- $s^\text{on}(t), s^\text{off}(t) \in \{0, 1\}$ being switch binary variables
+
+**Behavior:**
+- When $s^\text{on}(t) = 0$ and $s^\text{off}(t) = 0$: forces $v(t) = v(t-1)$ (no change)
+- When $s^\text{on}(t) = 1$ or $s^\text{off}(t) = 1$: allows change up to $\pm \Delta v^\text{max}$
+
+**Implementation:** [`BoundingPatterns.continuous_transition_bounds()`][flixopt.modeling.BoundingPatterns.continuous_transition_bounds]
+
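+**Example:**
+
+A pure-Python feasibility check with made-up values: the variable may only move in steps where a switch fires.
+
+```python
+dv_max = 50
+v_prev = 0
+v = [40, 40, 40, 10, 10]  # continuous variable
+s_on = [1, 0, 0, 0, 0]    # switch-on events
+s_off = [0, 0, 0, 1, 0]   # switch-off events
+
+trajectory = [v_prev] + v
+for t in range(len(v)):
+    change = trajectory[t + 1] - trajectory[t]
+    window = dv_max * (s_on[t] + s_off[t])
+    assert -window <= change <= window  # window is zero when no switch fires
+```
+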
+---
+
+## Level Changes with Binaries
+
+This pattern models a level variable that can increase or decrease, with changes controlled by binary variables. This is useful for inventory management, capacity adjustments, or gradual state changes.
+
+### Level Evolution
+
+The level evolves based on increases and decreases:
+
+$$\label{eq:level_initial}
+\ell(0) = \ell_\text{init} + \ell^\text{inc}(0) - \ell^\text{dec}(0)
+$$
+
+$$\label{eq:level_evolution}
+\ell(t) = \ell(t-1) + \ell^\text{inc}(t) - \ell^\text{dec}(t) \quad \forall t > 0
+$$
+
+With:
+- $\ell(t)$ being the level variable
+- $\ell_\text{init}$ being the initial level
+- $\ell^\text{inc}(t)$ being the increase in level at time $t$ (non-negative)
+- $\ell^\text{dec}(t)$ being the decrease in level at time $t$ (non-negative)
+
+---
+
+### Change Bounds with Binary Control
+
+Changes are bounded and controlled by binary variables:
+
+$$\label{eq:increase_bound}
+\ell^\text{inc}(t) \leq \Delta \ell^\text{max} \cdot b^\text{inc}(t) \quad \forall t
+$$
+
+$$\label{eq:decrease_bound}
+\ell^\text{dec}(t) \leq \Delta \ell^\text{max} \cdot b^\text{dec}(t) \quad \forall t
+$$
+
+With:
+- $\Delta \ell^\text{max}$ being the maximum change per time step
+- $b^\text{inc}(t), b^\text{dec}(t) \in \{0, 1\}$ being binary control variables
+
+---
+
+### Mutual Exclusivity of Changes
+
+Simultaneous increase and decrease are prevented:
+
+$$\label{eq:change_exclusivity}
+b^\text{inc}(t) + b^\text{dec}(t) \leq 1 \quad \forall t
+$$
+
+This ensures:
+- Level can only increase OR decrease (or stay constant) in each time step
+- No simultaneous contradictory changes
+
+---
+
+### Complete Level Change Formulation
+
+$$
+\begin{align}
+\ell(0) &= \ell_\text{init} + \ell^\text{inc}(0) - \ell^\text{dec}(0) && \label{eq:level_complete_1} \\
+\ell(t) &= \ell(t-1) + \ell^\text{inc}(t) - \ell^\text{dec}(t) && \forall t > 0 \label{eq:level_complete_2} \\
+\ell^\text{inc}(t) &\leq \Delta \ell^\text{max} \cdot b^\text{inc}(t) && \forall t \label{eq:level_complete_3} \\
+\ell^\text{dec}(t) &\leq \Delta \ell^\text{max} \cdot b^\text{dec}(t) && \forall t \label{eq:level_complete_4} \\
+b^\text{inc}(t) + b^\text{dec}(t) &\leq 1 && \forall t \label{eq:level_complete_5} \\
+b^\text{inc}(t), b^\text{dec}(t) &\in \{0, 1\} && \forall t \label{eq:level_complete_6}
+\end{align}
+$$
+
+**Implementation:** [`BoundingPatterns.link_changes_to_level_with_binaries()`][flixopt.modeling.BoundingPatterns.link_changes_to_level_with_binaries]
+
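+**Example:**
+
+A pure-Python walkthrough of the level evolution with made-up values:
+
+```python
+max_change, level_init = 10, 50
+inc = [10, 0, 0, 5]   # level increases
+dec = [0, 0, 8, 0]    # level decreases
+b_inc = [1, 0, 0, 1]  # binary increase controls
+b_dec = [0, 0, 1, 0]  # binary decrease controls
+
+level = []
+for t in range(len(inc)):
+    assert inc[t] <= max_change * b_inc[t]  # increase requires b_inc = 1
+    assert dec[t] <= max_change * b_dec[t]  # decrease requires b_dec = 1
+    assert b_inc[t] + b_dec[t] <= 1         # never both in the same step
+    prev = level_init if t == 0 else level[t - 1]
+    level.append(prev + inc[t] - dec[t])
+
+print(level)  # [60, 60, 52, 57]
+```
+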
+---
+
+## Use Cases
+
+### Startup/Shutdown Costs
+
+Track startup and shutdown events to apply costs:
+
+```python
+# Create switch variables
+switch_on, switch_off = modeling.state_transition_bounds(
+ state_variable=on_state,
+ previous_state=previous_on_state
+)
+
+# Apply costs to switches
+startup_cost = switch_on * startup_cost_per_event
+shutdown_cost = switch_off * shutdown_cost_per_event
+```
+
+### Limited Switching
+
+Restrict the number of state changes:
+
+```python
+# Track all switches
+switch_on, switch_off = modeling.state_transition_bounds(
+ state_variable=on_state
+)
+
+# Limit total switches
+model.add_constraints(
+    (switch_on + switch_off).sum() <= max_switches
+)
+```
+
+### Gradual Capacity Changes
+
+Model systems where capacity can be incrementally adjusted:
+
+```python
+# Level represents installed capacity
+level_var, increase, decrease, inc_binary, dec_binary = \
+ modeling.link_changes_to_level_with_binaries(
+ initial_level=current_capacity,
+ max_change=max_capacity_change_per_period
+ )
+
+# Constrain total increases
+model.add_constraints(increase.sum() <= max_total_expansion)
+```
+
+---
+
+## Used In
+
+These patterns are used in:
+- [`OnOffParameters`](../features/OnOffParameters.md) - Startup/shutdown tracking and costs
+- Operating mode switching with transition costs
+- Investment planning with staged capacity additions
+- Inventory management with controlled stock changes
diff --git a/docs/user-guide/Mathematical Notation/others.md b/docs/user-guide/mathematical-notation/others.md
similarity index 100%
rename from docs/user-guide/Mathematical Notation/others.md
rename to docs/user-guide/mathematical-notation/others.md
diff --git a/docs/user-guide/recipes/index.md b/docs/user-guide/recipes/index.md
new file mode 100644
index 000000000..8ac7d1812
--- /dev/null
+++ b/docs/user-guide/recipes/index.md
@@ -0,0 +1,47 @@
+# Recipes
+
+**Coming Soon!** 🚧
+
+This section will contain quick, copy-paste ready code snippets for common FlixOpt patterns.
+
+---
+
+## What Will Be Here?
+
+Short, focused code snippets showing **how to do specific things** in FlixOpt:
+
+- Common modeling patterns
+- Integration with other tools
+- Performance optimizations
+- Domain-specific solutions
+- Data analysis shortcuts
+
+Unlike full examples, recipes will be focused snippets showing a single concept.
+
+---
+
+## Planned Topics
+
+- **Storage Patterns** - Batteries, thermal storage, seasonal storage
+- **Multi-Criteria Optimization** - Balance multiple objectives
+- **Data I/O** - Loading time series from CSV, databases, APIs
+- **Data Manipulation** - Common xarray operations for parameterization and analysis
+- **Investment Optimization** - Size optimization strategies
+- **Renewable Integration** - Solar, wind capacity optimization
+- **On/Off Constraints** - Minimum runtime, startup costs
+- **Large-Scale Problems** - Segmented and aggregated calculations
+- **Custom Constraints** - Extend models with linopy
+- **Domain-Specific Patterns** - District heating, microgrids, industrial processes
+
+---
+
+## Want to Contribute?
+
+**We need your help!** If you have recurring modeling patterns or clever solutions to share, please contribute via [GitHub issues](https://github.com/flixopt/flixopt/issues) or pull requests.
+
+Guidelines:
+1. Keep it short (< 100 lines of code)
+2. Focus on one specific technique
+3. Add brief explanation and when to use it
+
+Check the [contribution guide](../../contribute.md) for details.
diff --git a/examples/00_Minmal/minimal_example.py b/examples/00_Minmal/minimal_example.py
index e9ef241ff..81b7c2dba 100644
--- a/examples/00_Minmal/minimal_example.py
+++ b/examples/00_Minmal/minimal_example.py
@@ -9,6 +9,9 @@
import flixopt as fx
if __name__ == '__main__':
+ # Enable console logging
+ fx.CONFIG.Logging.console = True
+ fx.CONFIG.apply()
# --- Define the Flow System, that will hold all elements, and the time steps you want to model ---
timesteps = pd.date_range('2020-01-01', periods=3, freq='h')
flow_system = fx.FlowSystem(timesteps)
@@ -37,13 +40,15 @@
# Heat load component with a fixed thermal demand profile
heat_load = fx.Sink(
'Heat Demand',
- sink=fx.Flow(label='Thermal Load', bus='District Heating', size=1, fixed_relative_profile=thermal_load_profile),
+ inputs=[
+ fx.Flow(label='Thermal Load', bus='District Heating', size=1, fixed_relative_profile=thermal_load_profile)
+ ],
)
# Gas source component with cost-effect per flow hour
gas_source = fx.Source(
'Natural Gas Tariff',
- source=fx.Flow(label='Gas Flow', bus='Natural Gas', size=1000, effects_per_flow_hour=0.04), # 0.04 €/kWh
+ outputs=[fx.Flow(label='Gas Flow', bus='Natural Gas', size=1000, effects_per_flow_hour=0.04)], # 0.04 €/kWh
)
# --- Build the Flow System ---
diff --git a/examples/01_Simple/simple_example.py b/examples/01_Simple/simple_example.py
index 8239f805a..ee90af47a 100644
--- a/examples/01_Simple/simple_example.py
+++ b/examples/01_Simple/simple_example.py
@@ -8,6 +8,9 @@
import flixopt as fx
if __name__ == '__main__':
+ # Enable console logging
+ fx.CONFIG.Logging.console = True
+ fx.CONFIG.apply()
# --- Create Time Series Data ---
# Heat demand profile (e.g., kW) over time and corresponding power prices
heat_demand_per_h = np.array([30, 0, 90, 110, 110, 20, 20, 20, 20])
@@ -29,6 +32,7 @@
description='Kosten',
is_standard=True, # standard effect: no explicit value needed for costs
is_objective=True, # Minimizing costs as the optimization objective
+ share_from_temporal={'CO2': 0.2},
)
# CO2 emissions effect with an associated cost impact
@@ -36,8 +40,7 @@
label='CO2',
unit='kg',
description='CO2_e-Emissionen',
- specific_share_to_other_effects_operation={costs.label: 0.2},
- maximum_operation_per_hour=1000, # Max CO2 emissions per hour
+ maximum_per_hour=1000, # Max CO2 emissions per hour
)
# --- Define Flow System Components ---
@@ -64,9 +67,10 @@
label='Storage',
charging=fx.Flow('Q_th_load', bus='Fernwärme', size=1000),
discharging=fx.Flow('Q_th_unload', bus='Fernwärme', size=1000),
- capacity_in_flow_hours=fx.InvestParameters(fix_effects=20, fixed_size=30, optional=False),
+ capacity_in_flow_hours=fx.InvestParameters(effects_of_investment=20, fixed_size=30, mandatory=True),
initial_charge_state=0, # Initial storage state: empty
- relative_maximum_charge_state=1 / 100 * np.array([80, 70, 80, 80, 80, 80, 80, 80, 80, 80]),
+ relative_maximum_charge_state=1 / 100 * np.array([80, 70, 80, 80, 80, 80, 80, 80, 80]),
+ relative_maximum_final_charge_state=0.8,
eta_charge=0.9,
eta_discharge=1, # Efficiency factors for charging/discharging
relative_loss_per_hour=0.08, # 8% loss per hour. Absolute loss depends on current charge state
@@ -76,18 +80,20 @@
# Heat Demand Sink: Represents a fixed heat demand profile
heat_sink = fx.Sink(
label='Heat Demand',
- sink=fx.Flow(label='Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=heat_demand_per_h),
+ inputs=[fx.Flow(label='Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=heat_demand_per_h)],
)
# Gas Source: Gas tariff source with associated costs and CO2 emissions
gas_source = fx.Source(
label='Gastarif',
- source=fx.Flow(label='Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={costs.label: 0.04, CO2.label: 0.3}),
+ outputs=[
+ fx.Flow(label='Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={costs.label: 0.04, CO2.label: 0.3})
+ ],
)
# Power Sink: Represents the export of electricity to the grid
power_sink = fx.Sink(
- label='Einspeisung', sink=fx.Flow(label='P_el', bus='Strom', effects_per_flow_hour=-1 * power_prices)
+ label='Einspeisung', inputs=[fx.Flow(label='P_el', bus='Strom', effects_per_flow_hour=-1 * power_prices)]
)
# --- Build the Flow System ---
diff --git a/examples/02_Complex/complex_example.py b/examples/02_Complex/complex_example.py
index 175211c26..805cb08f6 100644
--- a/examples/02_Complex/complex_example.py
+++ b/examples/02_Complex/complex_example.py
@@ -9,6 +9,9 @@
import flixopt as fx
if __name__ == '__main__':
+ # Enable console logging
+ fx.CONFIG.Logging.console = True
+ fx.CONFIG.apply()
# --- Experiment Options ---
# Configure options for testing various parameters and behaviors
check_penalty = False
@@ -40,8 +43,8 @@
# --- Define Effects ---
# Specify effects related to costs, CO2 emissions, and primary energy consumption
- Costs = fx.Effect('costs', '€', 'Kosten', is_standard=True, is_objective=True)
- CO2 = fx.Effect('CO2', 'kg', 'CO2_e-Emissionen', specific_share_to_other_effects_operation={Costs.label: 0.2})
+ Costs = fx.Effect('costs', '€', 'Kosten', is_standard=True, is_objective=True, share_from_temporal={'CO2': 0.2})
+ CO2 = fx.Effect('CO2', 'kg', 'CO2_e-Emissionen')
PE = fx.Effect('PE', 'kWh_PE', 'Primärenergie', maximum_total=3.5e3)
# --- Define Components ---
@@ -57,10 +60,10 @@
label='Q_th', # Thermal output
bus='Fernwärme', # Linked bus
size=fx.InvestParameters(
- fix_effects=1000, # Fixed investment costs
+ effects_of_investment=1000, # Fixed investment costs
fixed_size=50, # Fixed size
- optional=False, # Forced investment
- specific_effects={Costs.label: 10, PE.label: 2}, # Specific costs
+ mandatory=True, # Forced investment
+ effects_of_investment_per_size={Costs.label: 10, PE.label: 2}, # Specific costs
),
load_factor_max=1.0, # Maximum load factor (50 kW)
load_factor_min=0.1, # Minimum load factor (5 kW)
@@ -72,9 +75,8 @@
on_hours_total_min=0, # Minimum operating hours
on_hours_total_max=1000, # Maximum operating hours
consecutive_on_hours_max=10, # Max consecutive operating hours
- consecutive_on_hours_min=np.array(
- [1, 1, 1, 1, 1, 2, 2, 2, 2]
- ), # min consecutive operation hoursconsecutive_off_hours_max=10, # Max consecutive off hours
+ consecutive_on_hours_min=np.array([1, 1, 1, 1, 1, 2, 2, 2, 2]), # min consecutive operation hours
+ consecutive_off_hours_max=10, # Max consecutive off hours
effects_per_switch_on=0.01, # Cost per switch-on
switch_on_total_max=1000, # Max number of starts
),
@@ -130,8 +132,8 @@
charging=fx.Flow('Q_th_load', bus='Fernwärme', size=1e4),
discharging=fx.Flow('Q_th_unload', bus='Fernwärme', size=1e4),
capacity_in_flow_hours=fx.InvestParameters(
- piecewise_effects=segmented_investment_effects, # Investment effects
- optional=False, # Forced investment
+ piecewise_effects_of_investment=segmented_investment_effects, # Investment effects
+ mandatory=True, # Forced investment
minimum_size=0,
maximum_size=1000, # Optimizing between 0 and 1000 kWh
),
@@ -147,33 +149,39 @@
# 5.a) Heat demand profile
Waermelast = fx.Sink(
'Wärmelast',
- sink=fx.Flow(
- 'Q_th_Last', # Heat sink
- bus='Fernwärme', # Linked bus
- size=1,
- fixed_relative_profile=heat_demand, # Fixed demand profile
- ),
+ inputs=[
+ fx.Flow(
+ 'Q_th_Last', # Heat sink
+ bus='Fernwärme', # Linked bus
+ size=1,
+ fixed_relative_profile=heat_demand, # Fixed demand profile
+ )
+ ],
)
# 5.b) Gas tariff
Gasbezug = fx.Source(
'Gastarif',
- source=fx.Flow(
- 'Q_Gas',
- bus='Gas', # Gas source
- size=1000, # Nominal size
- effects_per_flow_hour={Costs.label: 0.04, CO2.label: 0.3},
- ),
+ outputs=[
+ fx.Flow(
+ 'Q_Gas',
+ bus='Gas', # Gas source
+ size=1000, # Nominal size
+ effects_per_flow_hour={Costs.label: 0.04, CO2.label: 0.3},
+ )
+ ],
)
# 5.c) Feed-in of electricity
Stromverkauf = fx.Sink(
'Einspeisung',
- sink=fx.Flow(
- 'P_el',
- bus='Strom', # Feed-in tariff for electricity
- effects_per_flow_hour=-1 * electricity_price, # Negative price for feed-in
- ),
+ inputs=[
+ fx.Flow(
+ 'P_el',
+ bus='Strom', # Feed-in tariff for electricity
+ effects_per_flow_hour=-1 * electricity_price, # Negative price for feed-in
+ )
+ ],
)
# --- Build FlowSystem ---
@@ -182,7 +190,10 @@
flow_system.add_elements(bhkw_2) if use_chp_with_piecewise_conversion else flow_system.add_elements(bhkw)
pprint(flow_system) # Get a string representation of the FlowSystem
- flow_system.start_network_app() # Start the network app. DOes only work with extra dependencies installed
+ try:
+ flow_system.start_network_app() # Start the network app
+ except ImportError as e:
+ print(f'Network app requires extra dependencies: {e}')
# --- Solve FlowSystem ---
calculation = fx.FullCalculation('complex example', flow_system, time_indices)
diff --git a/examples/02_Complex/complex_example_results.py b/examples/02_Complex/complex_example_results.py
index 3be201ae8..5020f71fe 100644
--- a/examples/02_Complex/complex_example_results.py
+++ b/examples/02_Complex/complex_example_results.py
@@ -5,6 +5,9 @@
import flixopt as fx
if __name__ == '__main__':
+ # Enable console logging
+ fx.CONFIG.Logging.console = True
+ fx.CONFIG.apply()
# --- Load Results ---
try:
results = fx.results.CalculationResults.from_file('results', 'complex example')
diff --git a/examples/03_Calculation_types/example_calculation_types.py b/examples/03_Calculation_types/example_calculation_types.py
index a92a20163..05b25e782 100644
--- a/examples/03_Calculation_types/example_calculation_types.py
+++ b/examples/03_Calculation_types/example_calculation_types.py
@@ -11,6 +11,9 @@
import flixopt as fx
if __name__ == '__main__':
+ # Enable console logging
+ fx.CONFIG.Logging.console = True
+ fx.CONFIG.apply()
# Calculation Types
full, segmented, aggregated = True, True, True
@@ -30,7 +33,9 @@
excess_penalty = 1e5 # or set to None if not needed
# Data Import
- data_import = pd.read_csv(pathlib.Path('Zeitreihen2020.csv'), index_col=0).sort_index()
+ data_import = pd.read_csv(
+ pathlib.Path(__file__).parent.parent / 'resources' / 'Zeitreihen2020.csv', index_col=0
+ ).sort_index()
filtered_data = data_import['2020-01-01':'2020-01-02 23:45:00']
# filtered_data = data_import[0:500] # Alternatively filter by index
@@ -45,9 +50,9 @@
# TimeSeriesData objects
TS_heat_demand = fx.TimeSeriesData(heat_demand)
- TS_electricity_demand = fx.TimeSeriesData(electricity_demand, agg_weight=0.7)
- TS_electricity_price_sell = fx.TimeSeriesData(-(electricity_demand - 0.5), agg_group='p_el')
- TS_electricity_price_buy = fx.TimeSeriesData(electricity_price + 0.5, agg_group='p_el')
+ TS_electricity_demand = fx.TimeSeriesData(electricity_demand, aggregation_weight=0.7)
+ TS_electricity_price_sell = fx.TimeSeriesData(-(electricity_price - 0.5), aggregation_group='p_el')
+ TS_electricity_price_buy = fx.TimeSeriesData(electricity_price + 0.5, aggregation_group='p_el')
flow_system = fx.FlowSystem(timesteps)
flow_system.add_elements(
@@ -108,36 +113,43 @@
# 4. Sinks and Sources
# Heat Load Profile
a_waermelast = fx.Sink(
- 'Wärmelast', sink=fx.Flow('Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=TS_heat_demand)
+ 'Wärmelast', inputs=[fx.Flow('Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=TS_heat_demand)]
)
# Electricity Feed-in
a_strom_last = fx.Sink(
- 'Stromlast', sink=fx.Flow('P_el_Last', bus='Strom', size=1, fixed_relative_profile=TS_electricity_demand)
+ 'Stromlast', inputs=[fx.Flow('P_el_Last', bus='Strom', size=1, fixed_relative_profile=TS_electricity_demand)]
)
# Gas Tariff
a_gas_tarif = fx.Source(
'Gastarif',
- source=fx.Flow('Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={costs.label: gas_price, CO2.label: 0.3}),
+ outputs=[
+ fx.Flow('Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={costs.label: gas_price, CO2.label: 0.3})
+ ],
)
# Coal Tariff
a_kohle_tarif = fx.Source(
'Kohletarif',
- source=fx.Flow('Q_Kohle', bus='Kohle', size=1000, effects_per_flow_hour={costs.label: 4.6, CO2.label: 0.3}),
+ outputs=[fx.Flow('Q_Kohle', bus='Kohle', size=1000, effects_per_flow_hour={costs.label: 4.6, CO2.label: 0.3})],
)
# Electricity Tariff and Feed-in
a_strom_einspeisung = fx.Sink(
- 'Einspeisung', sink=fx.Flow('P_el', bus='Strom', size=1000, effects_per_flow_hour=TS_electricity_price_sell)
+ 'Einspeisung', inputs=[fx.Flow('P_el', bus='Strom', size=1000, effects_per_flow_hour=TS_electricity_price_sell)]
)
a_strom_tarif = fx.Source(
'Stromtarif',
- source=fx.Flow(
- 'P_el', bus='Strom', size=1000, effects_per_flow_hour={costs.label: TS_electricity_price_buy, CO2: 0.3}
- ),
+ outputs=[
+ fx.Flow(
+ 'P_el',
+ bus='Strom',
+ size=1000,
+ effects_per_flow_hour={costs.label: TS_electricity_price_buy, CO2.label: 0.3},
+ )
+ ],
)
# Flow System Setup
@@ -161,12 +173,12 @@
if full:
calculation = fx.FullCalculation('Full', flow_system)
calculation.do_modeling()
- calculation.solve(fx.solvers.HighsSolver(0, 60))
+ calculation.solve(fx.solvers.HighsSolver(0.01 / 100, 60))
calculations.append(calculation)
if segmented:
calculation = fx.SegmentedCalculation('Segmented', flow_system, segment_length, overlap_length)
- calculation.do_modeling_and_solve(fx.solvers.HighsSolver(0, 60))
+ calculation.do_modeling_and_solve(fx.solvers.HighsSolver(0.01 / 100, 60))
calculations.append(calculation)
if aggregated:
@@ -175,7 +187,7 @@
aggregation_parameters.time_series_for_low_peaks = [TS_electricity_demand, TS_heat_demand]
calculation = fx.AggregatedCalculation('Aggregated', flow_system, aggregation_parameters)
calculation.do_modeling()
- calculation.solve(fx.solvers.HighsSolver(0, 60))
+ calculation.solve(fx.solvers.HighsSolver(0.01 / 100, 60))
calculations.append(calculation)
# Get solutions for plotting for different calculations
@@ -191,34 +203,35 @@ def get_solutions(calcs: list, variable: str) -> xr.Dataset:
# --- Plotting for comparison ---
fx.plotting.with_plotly(
get_solutions(calculations, 'Speicher|charge_state').to_dataframe(),
- mode='line',
+ style='line',
title='Charge State Comparison',
ylabel='Charge state',
).write_html('results/Charge State.html')
fx.plotting.with_plotly(
get_solutions(calculations, 'BHKW2(Q_th)|flow_rate').to_dataframe(),
- mode='line',
+ style='line',
title='BHKW2(Q_th) Flow Rate Comparison',
ylabel='Flow rate',
).write_html('results/BHKW2 Thermal Power.html')
fx.plotting.with_plotly(
- get_solutions(calculations, 'costs(operation)|total_per_timestep').to_dataframe(),
- mode='line',
+ get_solutions(calculations, 'costs(temporal)|per_timestep').to_dataframe(),
+ style='line',
title='Operation Cost Comparison',
ylabel='Costs [€]',
).write_html('results/Operation Costs.html')
fx.plotting.with_plotly(
- pd.DataFrame(get_solutions(calculations, 'costs(operation)|total_per_timestep').to_dataframe().sum()).T,
- mode='bar',
+ pd.DataFrame(get_solutions(calculations, 'costs(temporal)|per_timestep').to_dataframe().sum()).T,
+ style='stacked_bar',
title='Total Cost Comparison',
ylabel='Costs [€]',
).update_layout(barmode='group').write_html('results/Total Costs.html')
fx.plotting.with_plotly(
- pd.DataFrame([calc.durations for calc in calculations], index=[calc.name for calc in calculations]), 'bar'
+ pd.DataFrame([calc.durations for calc in calculations], index=[calc.name for calc in calculations]),
+ 'stacked_bar',
).update_layout(title='Duration Comparison', xaxis_title='Calculation type', yaxis_title='Time (s)').write_html(
'results/Speed Comparison.html'
)
diff --git a/examples/04_Scenarios/scenario_example.py b/examples/04_Scenarios/scenario_example.py
new file mode 100644
index 000000000..f06760603
--- /dev/null
+++ b/examples/04_Scenarios/scenario_example.py
@@ -0,0 +1,144 @@
+"""
+This script shows how to use the flixopt framework to model a simple energy system with multiple scenarios and periods.
+"""
+
+import numpy as np
+import pandas as pd
+
+import flixopt as fx
+
+if __name__ == '__main__':
+ # Create datetime array starting from '2020-01-01' for the given time period
+ timesteps = pd.date_range('2020-01-01', periods=9, freq='h')
+ scenarios = pd.Index(['Base Case', 'High Demand'])
+ periods = pd.Index([2020, 2021, 2022])
+
+ # --- Create Time Series Data ---
+ # Heat demand profile (e.g., kW) over time and corresponding power prices
+ heat_demand_per_h = pd.DataFrame(
+ {'Base Case': [30, 0, 90, 110, 110, 20, 20, 20, 20], 'High Demand': [30, 0, 100, 118, 125, 20, 20, 20, 20]},
+ index=timesteps,
+ )
+ power_prices = np.array([0.08, 0.09, 0.10])
+
+ flow_system = fx.FlowSystem(timesteps=timesteps, periods=periods, scenarios=scenarios, weights=np.array([0.5, 0.6]))
+
+ # --- Define Energy Buses ---
+    # These represent nodes where the used media are balanced (electricity, heat, and gas)
+ flow_system.add_elements(fx.Bus(label='Strom'), fx.Bus(label='Fernwärme'), fx.Bus(label='Gas'))
+
+ # --- Define Effects (Objective and CO2 Emissions) ---
+ # Cost effect: used as the optimization objective --> minimizing costs
+ costs = fx.Effect(
+ label='costs',
+ unit='€',
+ description='Kosten',
+ is_standard=True, # standard effect: no explicit value needed for costs
+ is_objective=True, # Minimizing costs as the optimization objective
+ share_from_temporal={'CO2': 0.2},
+ )
+
+ # CO2 emissions effect with an associated cost impact
+ CO2 = fx.Effect(
+ label='CO2',
+ unit='kg',
+ description='CO2_e-Emissionen',
+ maximum_per_hour=1000, # Max CO2 emissions per hour
+ )
+
+ # --- Define Flow System Components ---
+ # Boiler: Converts fuel (gas) into thermal energy (heat)
+ boiler = fx.linear_converters.Boiler(
+ label='Boiler',
+ eta=0.5,
+ Q_th=fx.Flow(
+ label='Q_th',
+ bus='Fernwärme',
+ size=50,
+ relative_minimum=0.1,
+ relative_maximum=1,
+ on_off_parameters=fx.OnOffParameters(),
+ ),
+ Q_fu=fx.Flow(label='Q_fu', bus='Gas'),
+ )
+
+ # Combined Heat and Power (CHP): Generates both electricity and heat from fuel
+ chp = fx.linear_converters.CHP(
+ label='CHP',
+ eta_th=0.5,
+ eta_el=0.4,
+ P_el=fx.Flow('P_el', bus='Strom', size=60, relative_minimum=5 / 60, on_off_parameters=fx.OnOffParameters()),
+ Q_th=fx.Flow('Q_th', bus='Fernwärme'),
+ Q_fu=fx.Flow('Q_fu', bus='Gas'),
+ )
+
+ # Storage: Energy storage system with charging and discharging capabilities
+ storage = fx.Storage(
+ label='Storage',
+ charging=fx.Flow('Q_th_load', bus='Fernwärme', size=1000),
+ discharging=fx.Flow('Q_th_unload', bus='Fernwärme', size=1000),
+ capacity_in_flow_hours=fx.InvestParameters(effects_of_investment=20, fixed_size=30, mandatory=True),
+ initial_charge_state=0, # Initial storage state: empty
+ relative_maximum_charge_state=np.array([80, 70, 80, 80, 80, 80, 80, 80, 80]) * 0.01,
+ relative_maximum_final_charge_state=0.8,
+ eta_charge=0.9,
+ eta_discharge=1, # Efficiency factors for charging/discharging
+ relative_loss_per_hour=0.08, # 8% loss per hour. Absolute loss depends on current charge state
+ prevent_simultaneous_charge_and_discharge=True, # Prevent charging and discharging at the same time
+ )
+
+ # Heat Demand Sink: Represents a fixed heat demand profile
+ heat_sink = fx.Sink(
+ label='Heat Demand',
+ inputs=[fx.Flow(label='Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=heat_demand_per_h)],
+ )
+
+ # Gas Source: Gas tariff source with associated costs and CO2 emissions
+ gas_source = fx.Source(
+ label='Gastarif',
+ outputs=[
+ fx.Flow(label='Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={costs.label: 0.04, CO2.label: 0.3})
+ ],
+ )
+
+ # Power Sink: Represents the export of electricity to the grid
+ power_sink = fx.Sink(
+ label='Einspeisung', inputs=[fx.Flow(label='P_el', bus='Strom', effects_per_flow_hour=-1 * power_prices)]
+ )
+
+ # --- Build the Flow System ---
+ # Add all defined components and effects to the flow system
+ flow_system.add_elements(costs, CO2, boiler, storage, chp, heat_sink, gas_source, power_sink)
+
+ # Visualize the flow system for validation purposes
+ flow_system.plot_network(show=True)
+
+ # --- Define and Run Calculation ---
+ # Create a calculation object to model the Flow System
+ calculation = fx.FullCalculation(name='Sim1', flow_system=flow_system)
+ calculation.do_modeling() # Translate the model to a solvable form, creating equations and Variables
+
+ # --- Solve the Calculation and Save Results ---
+ calculation.solve(fx.solvers.HighsSolver(mip_gap=0, time_limit_seconds=30))
+
+ # --- Analyze Results ---
+ calculation.results['Fernwärme'].plot_node_balance_pie()
+ calculation.results['Fernwärme'].plot_node_balance(style='stacked_bar')
+ calculation.results['Storage'].plot_node_balance()
+ calculation.results.plot_heatmap('CHP(Q_th)|flow_rate')
+
+ # Convert the results for the storage component to a dataframe and display
+ df = calculation.results['Storage'].node_balance_with_charge_state()
+ print(df)
+
+ # Plot charge state using matplotlib
+ fig, ax = calculation.results['Storage'].plot_charge_state(engine='matplotlib')
+ # Customize the plot further if needed
+ ax.set_title('Storage Charge State Over Time')
+ # Or save the figure
+ # fig.savefig('storage_charge_state.png')
+
+ # Save results to file for later usage
+ calculation.results.to_file()
diff --git a/examples/05_Two-stage-optimization/two_stage_optimization.py b/examples/05_Two-stage-optimization/two_stage_optimization.py
new file mode 100644
index 000000000..b6072a3c2
--- /dev/null
+++ b/examples/05_Two-stage-optimization/two_stage_optimization.py
@@ -0,0 +1,179 @@
+"""
+This script demonstrates how to downsample a FlowSystem to reduce the size of a model.
+This is especially useful for large models and during development, as it can drastically
+reduce computation time, leading to faster results and easier debugging.
+A common use case is to optimize investments on a downsampled version of the model
+and then fix the computed sizes when optimizing the actual dispatch at full resolution.
+While the result might differ from the global optimum, solving is much faster.
+"""
+
+import logging
+import pathlib
+import timeit
+
+import pandas as pd
+import xarray as xr
+
+import flixopt as fx
+
+logger = logging.getLogger('flixopt')
+
+if __name__ == '__main__':
+ # Data Import
+ data_import = pd.read_csv(
+ pathlib.Path(__file__).parent.parent / 'resources' / 'Zeitreihen2020.csv', index_col=0
+ ).sort_index()
+ filtered_data = data_import[:500]
+
+ filtered_data.index = pd.to_datetime(filtered_data.index)
+ timesteps = filtered_data.index
+
+ # Access specific columns and convert to 1D-numpy array
+ electricity_demand = filtered_data['P_Netz/MW'].to_numpy()
+ heat_demand = filtered_data['Q_Netz/MW'].to_numpy()
+ electricity_price = filtered_data['Strompr.€/MWh'].to_numpy()
+ gas_price = filtered_data['Gaspr.€/MWh'].to_numpy()
+
+ flow_system = fx.FlowSystem(timesteps)
+ flow_system.add_elements(
+ fx.Bus('Strom'),
+ fx.Bus('Fernwärme'),
+ fx.Bus('Gas'),
+ fx.Bus('Kohle'),
+ fx.Effect('costs', '€', 'Kosten', is_standard=True, is_objective=True),
+ fx.Effect('CO2', 'kg', 'CO2_e-Emissionen'),
+ fx.Effect('PE', 'kWh_PE', 'Primärenergie'),
+ fx.linear_converters.Boiler(
+ 'Kessel',
+ eta=0.85,
+ Q_th=fx.Flow(label='Q_th', bus='Fernwärme'),
+ Q_fu=fx.Flow(
+ label='Q_fu',
+ bus='Gas',
+ size=fx.InvestParameters(
+ effects_of_investment_per_size={'costs': 1_000}, minimum_size=10, maximum_size=500
+ ),
+ relative_minimum=0.2,
+ previous_flow_rate=20,
+ on_off_parameters=fx.OnOffParameters(effects_per_switch_on=300),
+ ),
+ ),
+ fx.linear_converters.CHP(
+ 'BHKW2',
+ eta_th=0.58,
+ eta_el=0.22,
+ on_off_parameters=fx.OnOffParameters(
+ effects_per_switch_on=1_000, consecutive_on_hours_min=10, consecutive_off_hours_min=10
+ ),
+ P_el=fx.Flow('P_el', bus='Strom'),
+ Q_th=fx.Flow('Q_th', bus='Fernwärme'),
+ Q_fu=fx.Flow(
+ 'Q_fu',
+ bus='Kohle',
+ size=fx.InvestParameters(
+ effects_of_investment_per_size={'costs': 3_000}, minimum_size=10, maximum_size=500
+ ),
+ relative_minimum=0.3,
+ previous_flow_rate=100,
+ ),
+ ),
+ fx.Storage(
+ 'Speicher',
+ capacity_in_flow_hours=fx.InvestParameters(
+ minimum_size=10, maximum_size=1000, effects_of_investment_per_size={'costs': 60}
+ ),
+ initial_charge_state='lastValueOfSim',
+ eta_charge=1,
+ eta_discharge=1,
+ relative_loss_per_hour=0.001,
+ prevent_simultaneous_charge_and_discharge=True,
+ charging=fx.Flow('Q_th_load', size=137, bus='Fernwärme'),
+ discharging=fx.Flow('Q_th_unload', size=158, bus='Fernwärme'),
+ ),
+ fx.Sink(
+ 'Wärmelast', inputs=[fx.Flow('Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=heat_demand)]
+ ),
+ fx.Source(
+ 'Gastarif',
+ outputs=[fx.Flow('Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={'costs': gas_price, 'CO2': 0.3})],
+ ),
+ fx.Source(
+ 'Kohletarif',
+ outputs=[fx.Flow('Q_Kohle', bus='Kohle', size=1000, effects_per_flow_hour={'costs': 4.6, 'CO2': 0.3})],
+ ),
+ fx.Source(
+ 'Einspeisung',
+ outputs=[
+ fx.Flow(
+ 'P_el', bus='Strom', size=1000, effects_per_flow_hour={'costs': electricity_price + 0.5, 'CO2': 0.3}
+ )
+ ],
+ ),
+ fx.Sink(
+ 'Stromlast',
+ inputs=[fx.Flow('P_el_Last', bus='Strom', size=1, fixed_relative_profile=electricity_demand)],
+ ),
+ fx.Source(
+ 'Stromtarif',
+ outputs=[
+ fx.Flow('P_el', bus='Strom', size=1000, effects_per_flow_hour={'costs': electricity_price, 'CO2': 0.3})
+ ],
+ ),
+ )
+
+ # Separate optimization of flow sizes and dispatch
+ start = timeit.default_timer()
+ calculation_sizing = fx.FullCalculation('Sizing', flow_system.resample('4h'))
+ calculation_sizing.do_modeling()
+ calculation_sizing.solve(fx.solvers.HighsSolver(0.1 / 100, 600))
+ timer_sizing = timeit.default_timer() - start
+
+ start = timeit.default_timer()
+ calculation_dispatch = fx.FullCalculation('Dispatch', flow_system)
+ calculation_dispatch.do_modeling()
+ calculation_dispatch.fix_sizes(calculation_sizing.results.solution)
+ calculation_dispatch.solve(fx.solvers.HighsSolver(0.1 / 100, 600))
+ timer_dispatch = timeit.default_timer() - start
+
+ if (calculation_dispatch.results.sizes().round(5) == calculation_sizing.results.sizes().round(5)).all().item():
+ logger.info('Sizes were correctly equalized')
+ else:
+ raise RuntimeError('Sizes were not correctly equalized')
+
+ # Optimization of both flow sizes and dispatch together
+ start = timeit.default_timer()
+ calculation_combined = fx.FullCalculation('Combined', flow_system)
+ calculation_combined.do_modeling()
+ calculation_combined.solve(fx.solvers.HighsSolver(0.1 / 100, 600))
+ timer_combined = timeit.default_timer() - start
+
+ # Comparison of results
+ comparison = xr.concat(
+ [calculation_combined.results.solution, calculation_dispatch.results.solution], dim='mode'
+ ).assign_coords(mode=['Combined', 'Two-stage'])
+ comparison['Duration [s]'] = xr.DataArray([timer_combined, timer_sizing + timer_dispatch], dims='mode')
+
+ comparison_main = comparison[
+ [
+ 'Duration [s]',
+ 'costs',
+ 'costs(periodic)',
+ 'costs(temporal)',
+ 'BHKW2(Q_fu)|size',
+ 'Kessel(Q_fu)|size',
+ 'Speicher|size',
+ ]
+ ]
+ comparison_main = xr.concat(
+ [
+ comparison_main,
+ (
+ (comparison_main.sel(mode='Two-stage') - comparison_main.sel(mode='Combined'))
+ / comparison_main.sel(mode='Combined')
+ * 100
+ ).assign_coords(mode='Diff [%]'),
+ ],
+ dim='mode',
+ )
+
+ print(comparison_main.to_pandas().T.round(2))
diff --git a/examples/03_Calculation_types/Zeitreihen2020.csv b/examples/resources/Zeitreihen2020.csv
similarity index 100%
rename from examples/03_Calculation_types/Zeitreihen2020.csv
rename to examples/resources/Zeitreihen2020.csv
diff --git a/flixopt/__init__.py b/flixopt/__init__.py
index d8ad05f19..8fc4e4851 100644
--- a/flixopt/__init__.py
+++ b/flixopt/__init__.py
@@ -2,9 +2,14 @@
This module bundles all common functionality of flixopt and sets up the logging
"""
-from importlib.metadata import version
+import warnings
+from importlib.metadata import PackageNotFoundError, version
-__version__ = version('flixopt')
+try:
+ __version__ = version('flixopt')
+except PackageNotFoundError:
+ # Package is not installed (development mode without editable install)
+ __version__ = '0.0.0.dev0'
from .commons import (
CONFIG,
@@ -35,3 +40,30 @@
results,
solvers,
)
+
+# === Runtime warning suppression for third-party libraries ===
+# These warnings are from dependencies and cannot be fixed by end users.
+# They are suppressed at runtime to provide a cleaner user experience.
+# These filters match the test configuration in pyproject.toml for consistency.
+
+# tsam: Time series aggregation library
+# - UserWarning: Informational message about minimal value constraints during clustering.
+warnings.filterwarnings('ignore', category=UserWarning, message='.*minimal value.*exceeds.*', module='tsam')
+# TODO: Might be able to fix it in flixopt?
+
+# linopy: Linear optimization library
+# - UserWarning: Coordinate mismatch warnings that don't affect functionality and are expected.
+warnings.filterwarnings(
+ 'ignore', category=UserWarning, message='Coordinates across variables not equal', module='linopy'
+)
+# - FutureWarning: join parameter default will change in future versions
+warnings.filterwarnings(
+ 'ignore',
+ category=FutureWarning,
+ message="In a future version of xarray the default value for join will change from join='outer' to join='exact'",
+ module='linopy',
+)
+
+# numpy: Core numerical library
+# - RuntimeWarning: Binary incompatibility warnings from compiled extensions during the numpy 1 -> 2 transition (safe to ignore).
+warnings.filterwarnings('ignore', category=RuntimeWarning, message='numpy\\.ndarray size changed')
diff --git a/flixopt/aggregation.py b/flixopt/aggregation.py
index 4e6c3892e..91ef618a9 100644
--- a/flixopt/aggregation.py
+++ b/flixopt/aggregation.py
@@ -22,8 +22,8 @@
from .components import Storage
from .structure import (
- Model,
- SystemModel,
+ FlowSystemModel,
+ Submodel,
)
if TYPE_CHECKING:
@@ -274,25 +274,25 @@ def use_extreme_periods(self):
@property
def labels_for_high_peaks(self) -> list[str]:
- return [ts.label for ts in self.time_series_for_high_peaks]
+ return [ts.name for ts in self.time_series_for_high_peaks]
@property
def labels_for_low_peaks(self) -> list[str]:
- return [ts.label for ts in self.time_series_for_low_peaks]
+ return [ts.name for ts in self.time_series_for_low_peaks]
@property
def use_low_peaks(self) -> bool:
return bool(self.time_series_for_low_peaks)
-class AggregationModel(Model):
- """The AggregationModel holds equations and variables related to the Aggregation of a FLowSystem.
+class AggregationModel(Submodel):
+ """The AggregationModel holds equations and variables related to the Aggregation of a FlowSystem.
It creates Equations that equates indices of variables, and introduces penalties related to binary variables, that
escape the equation to their related binaries in other periods"""
def __init__(
self,
- model: SystemModel,
+ model: FlowSystemModel,
aggregation_parameters: AggregationParameters,
flow_system: FlowSystem,
aggregation_data: Aggregation,
@@ -301,7 +301,7 @@ def __init__(
"""
Modeling-Element for "index-equating"-equations
"""
- super().__init__(model, label_of_element='Aggregation', label_full='Aggregation')
+ super().__init__(model, label_of_element='Aggregation', label_of_model='Aggregation')
self.flow_system = flow_system
self.aggregation_parameters = aggregation_parameters
self.aggregation_data = aggregation_data
@@ -315,22 +315,24 @@ def do_modeling(self):
indices = self.aggregation_data.get_equation_indices(skip_first_index_of_period=True)
- time_variables: set[str] = {k for k, v in self._model.variables.data.items() if 'time' in v.indexes}
- binary_variables: set[str] = {k for k, v in self._model.variables.data.items() if k in self._model.binaries}
+ time_variables: set[str] = {
+ name for name in self._model.variables if 'time' in self._model.variables[name].dims
+ }
+ binary_variables: set[str] = set(self._model.variables.binaries)
binary_time_variables: set[str] = time_variables & binary_variables
for component in components:
if isinstance(component, Storage) and not self.aggregation_parameters.fix_storage_flows:
continue # Fix Nothing in The Storage
- all_variables_of_component = set(component.model.variables)
+ all_variables_of_component = set(component.submodel.variables)
if self.aggregation_parameters.aggregate_data_and_fix_non_binary_vars:
- relevant_variables = component.model.variables[all_variables_of_component & time_variables]
+ relevant_variables = component.submodel.variables[all_variables_of_component & time_variables]
else:
- relevant_variables = component.model.variables[all_variables_of_component & binary_time_variables]
+ relevant_variables = component.submodel.variables[all_variables_of_component & binary_time_variables]
for variable in relevant_variables:
- self._equate_indices(component.model.variables[variable], indices)
+ self._equate_indices(component.submodel.variables[variable], indices)
penalty = self.aggregation_parameters.penalty_of_period_freedom
if (self.aggregation_parameters.percentage_of_period_freedom > 0) and penalty != 0:
@@ -343,12 +345,9 @@ def _equate_indices(self, variable: linopy.Variable, indices: tuple[np.ndarray,
# Gleichung:
# eq1: x(p1,t) - x(p3,t) = 0 # wobei p1 und p3 im gleichen Cluster sind und t = 0..N_p
- con = self.add(
- self._model.add_constraints(
- variable.isel(time=indices[0]) - variable.isel(time=indices[1]) == 0,
- name=f'{self.label_full}|equate_indices|{variable.name}',
- ),
- f'equate_indices|{variable.name}',
+ con = self.add_constraints(
+ variable.isel(time=indices[0]) - variable.isel(time=indices[1]) == 0,
+ short_name=f'equate_indices|{variable.name}',
)
# Korrektur: (bisher nur für Binärvariablen:)
@@ -356,23 +355,11 @@ def _equate_indices(self, variable: linopy.Variable, indices: tuple[np.ndarray,
variable.name in self._model.variables.binaries
and self.aggregation_parameters.percentage_of_period_freedom > 0
):
- var_k1 = self.add(
- self._model.add_variables(
- binary=True,
- coords={'time': variable.isel(time=indices[0]).indexes['time']},
- name=f'{self.label_full}|correction1|{variable.name}',
- ),
- f'correction1|{variable.name}',
- )
+ sel = variable.isel(time=indices[0])
+ coords = {d: sel.indexes[d] for d in sel.dims}
+ var_k1 = self.add_variables(binary=True, coords=coords, short_name=f'correction1|{variable.name}')
- var_k0 = self.add(
- self._model.add_variables(
- binary=True,
- coords={'time': variable.isel(time=indices[0]).indexes['time']},
- name=f'{self.label_full}|correction0|{variable.name}',
- ),
- f'correction0|{variable.name}',
- )
+ var_k0 = self.add_variables(binary=True, coords=coords, short_name=f'correction0|{variable.name}')
# equation extends ...
# --> On(p3) can be 0/1 independent of On(p1,t)!
@@ -383,21 +370,13 @@ def _equate_indices(self, variable: linopy.Variable, indices: tuple[np.ndarray,
con.lhs += 1 * var_k1 - 1 * var_k0
# interlock var_k1 and var_K2:
- # eq: var_k0(t)+var_k1(t) <= 1.1
- self.add(
- self._model.add_constraints(
- var_k0 + var_k1 <= 1.1, name=f'{self.label_full}|lock_k0_and_k1|{variable.name}'
- ),
- f'lock_k0_and_k1|{variable.name}',
- )
+ # eq: var_k0(t)+var_k1(t) <= 1
+ self.add_constraints(var_k0 + var_k1 <= 1, short_name=f'lock_k0_and_k1|{variable.name}')
# Begrenzung der Korrektur-Anzahl:
# eq: sum(K) <= n_Corr_max
- self.add(
- self._model.add_constraints(
- sum(var_k0) + sum(var_k1)
- <= round(self.aggregation_parameters.percentage_of_period_freedom / 100 * length),
- name=f'{self.label_full}|limit_corrections|{variable.name}',
- ),
- f'limit_corrections|{variable.name}',
+ limit = int(np.floor(self.aggregation_parameters.percentage_of_period_freedom / 100 * length))
+ self.add_constraints(
+ var_k0.sum(dim='time') + var_k1.sum(dim='time') <= limit,
+ short_name=f'limit_corrections|{variable.name}',
)
diff --git a/flixopt/calculation.py b/flixopt/calculation.py
index 4dc13889c..9e35b2dee 100644
--- a/flixopt/calculation.py
+++ b/flixopt/calculation.py
@@ -1,11 +1,11 @@
"""
This module contains the Calculation functionality for the flixopt framework.
-It is used to calculate a SystemModel for a given FlowSystem through a solver.
+It is used to calculate a FlowSystemModel for a given FlowSystem through a solver.
There are three different Calculation types:
- 1. FullCalculation: Calculates the SystemModel for the full FlowSystem
- 2. AggregatedCalculation: Calculates the SystemModel for the full FlowSystem, but aggregates the TimeSeriesData.
+ 1. FullCalculation: Calculates the FlowSystemModel for the full FlowSystem
+ 2. AggregatedCalculation: Calculates the FlowSystemModel for the full FlowSystem, but aggregates the TimeSeriesData.
This simplifies the mathematical model and usually speeds up the solving process.
- 3. SegmentedCalculation: Solves a SystemModel for each individual Segment of the FlowSystem.
+ 3. SegmentedCalculation: Solves a FlowSystemModel for each individual Segment of the FlowSystem.
"""
from __future__ import annotations
@@ -14,7 +14,9 @@
import math
import pathlib
import timeit
-from typing import TYPE_CHECKING, Any
+import warnings
+from collections import Counter
+from typing import TYPE_CHECKING, Annotated, Any
import numpy as np
import yaml
@@ -24,17 +26,18 @@
from .aggregation import AggregationModel, AggregationParameters
from .components import Storage
from .config import CONFIG
+from .core import DataConverter, Scalar, TimeSeriesData, drop_constant_arrays
from .features import InvestmentModel
+from .flow_system import FlowSystem
from .results import CalculationResults, SegmentedCalculationResults
if TYPE_CHECKING:
import pandas as pd
+ import xarray as xr
- from .core import Scalar
from .elements import Component
- from .flow_system import FlowSystem
from .solvers import _Solver
- from .structure import SystemModel
+ from .structure import FlowSystemModel
logger = logging.getLogger('flixopt')
@@ -42,26 +45,50 @@
class Calculation:
"""
class for defined way of solving a flow_system optimization
+
+ Args:
+ name: name of calculation
+ flow_system: flow_system which should be calculated
+ folder: folder where results should be saved. If None, then the current working directory is used.
+ normalize_weights: Whether to automatically normalize the weights (periods and scenarios) to sum up to 1 when solving.
+ active_timesteps: Deprecated. Use FlowSystem.sel(time=...) or FlowSystem.isel(time=...) instead.
"""
def __init__(
self,
name: str,
flow_system: FlowSystem,
- active_timesteps: pd.DatetimeIndex | None = None,
+ active_timesteps: Annotated[
+ pd.DatetimeIndex | None,
+ 'DEPRECATED: Use flow_system.sel(time=...) or flow_system.isel(time=...) instead',
+ ] = None,
folder: pathlib.Path | None = None,
+ normalize_weights: bool = True,
):
- """
- Args:
- name: name of calculation
- flow_system: flow_system which should be calculated
- active_timesteps: list with indices, which should be used for calculation. If None, then all timesteps are used.
- folder: folder where results should be saved. If None, then the current working directory is used.
- """
self.name = name
+ if flow_system.used_in_calculation:
+ logger.warning(
+ f'This FlowSystem is already used in a calculation:\n{flow_system}\n'
+ f'Creating a copy of the FlowSystem for Calculation "{self.name}".'
+ )
+ flow_system = flow_system.copy()
+
+ if active_timesteps is not None:
+ warnings.warn(
+ "The 'active_timesteps' parameter is deprecated and will be removed in a future version. "
+ 'Use flow_system.sel(time=timesteps) or flow_system.isel(time=indices) before passing '
+ 'the FlowSystem to the Calculation instead.',
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ flow_system = flow_system.sel(time=active_timesteps)
+ self._active_timesteps = active_timesteps # deprecated
+ self.normalize_weights = normalize_weights
+
+ flow_system._used_in_calculation = True
+
self.flow_system = flow_system
- self.model: SystemModel | None = None
- self.active_timesteps = active_timesteps
+ self.model: FlowSystemModel | None = None
self.durations = {'modeling': 0.0, 'solving': 0.0, 'saving': 0.0}
self.folder = pathlib.Path.cwd() / 'results' if folder is None else pathlib.Path(folder)
@@ -71,56 +98,59 @@ def __init__(
raise NotADirectoryError(f'Path {self.folder} exists and is not a directory.')
self.folder.mkdir(parents=False, exist_ok=True)
+ self._modeled = False
+
@property
def main_results(self) -> dict[str, Scalar | dict]:
from flixopt.features import InvestmentModel
- return {
+ main_results = {
'Objective': self.model.objective.value,
- 'Penalty': float(self.model.effects.penalty.total.solution.values),
+ 'Penalty': self.model.effects.penalty.total.solution.values,
'Effects': {
f'{effect.label} [{effect.unit}]': {
- 'operation': float(effect.model.operation.total.solution.values),
- 'invest': float(effect.model.invest.total.solution.values),
- 'total': float(effect.model.total.solution.values),
+ 'temporal': effect.submodel.temporal.total.solution.values,
+ 'periodic': effect.submodel.periodic.total.solution.values,
+ 'total': effect.submodel.total.solution.values,
}
for effect in self.flow_system.effects
},
'Invest-Decisions': {
'Invested': {
- model.label_of_element: float(model.size.solution)
+ model.label_of_element: model.size.solution
for component in self.flow_system.components.values()
- for model in component.model.all_sub_models
- if isinstance(model, InvestmentModel) and float(model.size.solution) >= CONFIG.Modeling.epsilon
+ for model in component.submodel.all_submodels
+ if isinstance(model, InvestmentModel) and model.size.solution.max() >= CONFIG.Modeling.epsilon
},
'Not invested': {
- model.label_of_element: float(model.size.solution)
+ model.label_of_element: model.size.solution
for component in self.flow_system.components.values()
- for model in component.model.all_sub_models
- if isinstance(model, InvestmentModel) and float(model.size.solution) < CONFIG.Modeling.epsilon
+ for model in component.submodel.all_submodels
+ if isinstance(model, InvestmentModel) and model.size.solution.max() < CONFIG.Modeling.epsilon
},
},
'Buses with excess': [
{
bus.label_full: {
- 'input': float(np.sum(bus.model.excess_input.solution.values)),
- 'output': float(np.sum(bus.model.excess_output.solution.values)),
+ 'input': bus.submodel.excess_input.solution.sum('time'),
+ 'output': bus.submodel.excess_output.solution.sum('time'),
}
}
for bus in self.flow_system.buses.values()
if bus.with_excess
and (
- float(np.sum(bus.model.excess_input.solution.values)) > 1e-3
- or float(np.sum(bus.model.excess_output.solution.values)) > 1e-3
+ bus.submodel.excess_input.solution.sum() > 1e-3 or bus.submodel.excess_output.solution.sum() > 1e-3
)
],
}
+ return utils.round_nested_floats(main_results)
+
@property
def summary(self):
return {
'Name': self.name,
- 'Number of timesteps': len(self.flow_system.time_series_collection.timesteps),
+ 'Number of timesteps': len(self.flow_system.timesteps),
'Calculation Type': self.__class__.__name__,
'Constraints': self.model.constraints.ncons,
'Variables': self.model.variables.nvars,
@@ -129,6 +159,19 @@ def summary(self):
'Config': CONFIG.to_dict(),
}
+ @property
+ def active_timesteps(self) -> pd.DatetimeIndex:
+ warnings.warn(
+ 'active_timesteps is deprecated. Use flow_system.sel(time=...) or flow_system.isel(time=...) instead.',
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self._active_timesteps
+
+ @property
+ def modeled(self) -> bool:
+        return self.model is not None
+
class FullCalculation(Calculation):
"""
@@ -136,19 +179,55 @@ class FullCalculation(Calculation):
This is the most comprehensive calculation type that considers every time step
in the optimization, providing the most accurate but computationally intensive solution.
+
+ Args:
+ name: name of calculation
+ flow_system: flow_system which should be calculated
+ folder: folder where results should be saved. If None, then the current working directory is used.
+ normalize_weights: Whether to automatically normalize the weights (periods and scenarios) to sum up to 1 when solving.
+ active_timesteps: Deprecated. Use FlowSystem.sel(time=...) or FlowSystem.isel(time=...) instead.
"""
- def do_modeling(self) -> SystemModel:
+ def do_modeling(self) -> FullCalculation:
t_start = timeit.default_timer()
- self._activate_time_series()
+ self.flow_system.connect_and_transform()
- self.model = self.flow_system.create_model()
+ self.model = self.flow_system.create_model(self.normalize_weights)
self.model.do_modeling()
self.durations['modeling'] = round(timeit.default_timer() - t_start, 2)
- return self.model
+ return self
+
+ def fix_sizes(self, ds: xr.Dataset, decimal_rounding: int | None = 5) -> FullCalculation:
+ """Fix the sizes of the calculations to specified values.
- def solve(self, solver: _Solver, log_file: pathlib.Path | None = None, log_main_results: bool = True):
+ Args:
+            ds: The dataset mapping variable names to their sizes.
+            decimal_rounding: The number of decimal places to round the sizes to. Without rounding, numerical errors might lead to infeasibility.
+ """
+ if not self.modeled:
+ raise RuntimeError('Model was not created. Call do_modeling() first.')
+ if decimal_rounding is not None:
+ ds = ds.round(decimal_rounding)
+
+ for name, da in ds.data_vars.items():
+ if '|size' not in name:
+ continue
+ if name not in self.model.variables:
+ logger.debug(f'Variable {name} not found in calculation model. Skipping.')
+ continue
+
+ con = self.model.add_constraints(
+ self.model[name] == da,
+ name=f'{name}-fixed',
+ )
+ logger.debug(f'Fixed "{name}":\n{con}')
+
+ return self
+
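The intended two-stage pattern: solve a coarse model first, then fix the resulting sizes in the full-resolution model. A sketch, assuming `FlowSystem.resample()` accepts a pandas-style frequency and that the linopy model exposes its solution as an xarray Dataset (both are assumptions here):

```python
# Stage 1: coarse temporal resolution to get investment sizes cheaply
coarse = FullCalculation('coarse', flow_system.resample(time='4h'))
coarse.do_modeling()
coarse.solve(solver)

# Stage 2: fix the '|size' variables in the full-resolution model
fine = FullCalculation('fine', flow_system)
fine.do_modeling()
fine.fix_sizes(coarse.model.solution, decimal_rounding=5)  # assumption: .solution Dataset
fine.solve(solver)
```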
+ def solve(
+ self, solver: _Solver, log_file: pathlib.Path | None = None, log_main_results: bool = True
+ ) -> FullCalculation:
t_start = timeit.default_timer()
self.model.solve(
@@ -171,11 +250,10 @@ def solve(self, solver: _Solver, log_file: pathlib.Path | None = None, log_main_
# Log the formatted output
if log_main_results:
- logger.info(f'{" Main Results ":#^80}')
logger.info(
- '\n'
+ f'{" Main Results ":#^80}\n'
+ yaml.dump(
- utils.round_floats(self.main_results),
+ utils.round_nested_floats(self.main_results),
default_flow_style=False,
sort_keys=False,
allow_unicode=True,
@@ -185,11 +263,7 @@ def solve(self, solver: _Solver, log_file: pathlib.Path | None = None, log_main_
self.results = CalculationResults.from_calculation(self)
- def _activate_time_series(self):
- self.flow_system.transform_data()
- self.flow_system.time_series_collection.activate_timesteps(
- active_timesteps=self.active_timesteps,
- )
+ return self
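With `do_modeling()`, `fix_sizes()` and `solve()` all returning the calculation instance, the standard workflow collapses to a single chain (a sketch, assuming an already constructed solver object):

```python
calc = FullCalculation('base_case', flow_system).do_modeling().solve(solver)
results = calc.results
```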
class AggregatedCalculation(FullCalculation):
@@ -221,29 +295,34 @@ def __init__(
flow_system: FlowSystem,
aggregation_parameters: AggregationParameters,
components_to_clusterize: list[Component] | None = None,
- active_timesteps: pd.DatetimeIndex | None = None,
+ active_timesteps: Annotated[
+ pd.DatetimeIndex | None,
+ 'DEPRECATED: Use flow_system.sel(time=...) or flow_system.isel(time=...) instead',
+ ] = None,
folder: pathlib.Path | None = None,
):
+ if flow_system.scenarios is not None:
+ raise ValueError('Aggregation is not supported for scenarios yet. Please use FullCalculation instead.')
super().__init__(name, flow_system, active_timesteps, folder=folder)
self.aggregation_parameters = aggregation_parameters
self.components_to_clusterize = components_to_clusterize
self.aggregation = None
- def do_modeling(self) -> SystemModel:
+ def do_modeling(self) -> AggregatedCalculation:
t_start = timeit.default_timer()
- self._activate_time_series()
+ self.flow_system.connect_and_transform()
self._perform_aggregation()
# Model the System
- self.model = self.flow_system.create_model()
+ self.model = self.flow_system.create_model(self.normalize_weights)
self.model.do_modeling()
- # Add Aggregation Model after modeling the rest
+ # Add Aggregation Submodel after modeling the rest
self.aggregation = AggregationModel(
self.model, self.aggregation_parameters, self.flow_system, self.aggregation, self.components_to_clusterize
)
self.aggregation.do_modeling()
self.durations['modeling'] = round(timeit.default_timer() - t_start, 2)
- return self.model
+ return self
def _perform_aggregation(self):
from .aggregation import Aggregation
@@ -251,41 +330,34 @@ def _perform_aggregation(self):
t_start_agg = timeit.default_timer()
# Validation
- dt_min, dt_max = (
- np.min(self.flow_system.time_series_collection.hours_per_timestep),
- np.max(self.flow_system.time_series_collection.hours_per_timestep),
- )
+ dt_min = float(self.flow_system.hours_per_timestep.min().item())
+ dt_max = float(self.flow_system.hours_per_timestep.max().item())
if not dt_min == dt_max:
raise ValueError(
                f'Aggregation failed due to inconsistent time step sizes: '
f'delta_t varies from {dt_min} to {dt_max} hours.'
)
- steps_per_period = (
- self.aggregation_parameters.hours_per_period
- / self.flow_system.time_series_collection.hours_per_timestep.max()
- )
- is_integer = (
- self.aggregation_parameters.hours_per_period
- % self.flow_system.time_series_collection.hours_per_timestep.max()
- ).item() == 0
- if not (steps_per_period.size == 1 and is_integer):
+ ratio = self.aggregation_parameters.hours_per_period / dt_max
+ if not np.isclose(ratio, round(ratio), atol=1e-9):
raise ValueError(
f'The selected {self.aggregation_parameters.hours_per_period=} does not match the time '
- f'step size of {dt_min} hours). It must be a multiple of {dt_min} hours.'
+ f'step size of {dt_max} hours. It must be an integer multiple of {dt_max} hours.'
)
logger.info(f'{"":#^80}')
logger.info(f'{" Aggregating TimeSeries Data ":#^80}')
+ ds = self.flow_system.to_dataset()
+
+        temporally_changing_ds = drop_constant_arrays(ds, dim='time')
+
# Aggregation - creation of aggregated timeseries:
self.aggregation = Aggregation(
- original_data=self.flow_system.time_series_collection.to_dataframe(
- include_extra_timestep=False
- ), # Exclude last row (NaN)
+            original_data=temporally_changing_ds.to_dataframe(),
hours_per_time_step=float(dt_min),
hours_per_period=self.aggregation_parameters.hours_per_period,
nr_of_periods=self.aggregation_parameters.nr_of_periods,
- weights=self.flow_system.time_series_collection.calculate_aggregation_weights(),
+            weights=self.calculate_aggregation_weights(temporally_changing_ds),
time_series_for_high_peaks=self.aggregation_parameters.labels_for_high_peaks,
time_series_for_low_peaks=self.aggregation_parameters.labels_for_low_peaks,
)
@@ -293,11 +365,45 @@ def _perform_aggregation(self):
self.aggregation.cluster()
self.aggregation.plot(show=True, save=self.folder / 'aggregation.html')
if self.aggregation_parameters.aggregate_data_and_fix_non_binary_vars:
- self.flow_system.time_series_collection.insert_new_data(
- self.aggregation.aggregated_data, include_extra_timestep=False
- )
+ ds = self.flow_system.to_dataset()
+ for name, series in self.aggregation.aggregated_data.items():
+ da = (
+ DataConverter.to_dataarray(series, self.flow_system.coords)
+ .rename(name)
+ .assign_attrs(ds[name].attrs)
+ )
+ if TimeSeriesData.is_timeseries_data(da):
+ da = TimeSeriesData.from_dataarray(da)
+
+ ds[name] = da
+
+ self.flow_system = FlowSystem.from_dataset(ds)
+ self.flow_system.connect_and_transform()
self.durations['aggregation'] = round(timeit.default_timer() - t_start_agg, 2)
+ @classmethod
+ def calculate_aggregation_weights(cls, ds: xr.Dataset) -> dict[str, float]:
+ """Calculate weights for all datavars in the dataset. Weights are pulled from the attrs of the datavars."""
+
+ groups = [da.attrs['aggregation_group'] for da in ds.data_vars.values() if 'aggregation_group' in da.attrs]
+ group_counts = Counter(groups)
+
+ # Calculate weight for each group (1/count)
+ group_weights = {group: 1 / count for group, count in group_counts.items()}
+
+ weights = {}
+ for name, da in ds.data_vars.items():
+ group_weight = group_weights.get(da.attrs.get('aggregation_group'))
+ if group_weight is not None:
+ weights[name] = group_weight
+ else:
+ weights[name] = da.attrs.get('aggregation_weight', 1)
+
+ if np.all(np.isclose(list(weights.values()), 1, atol=1e-6)):
+ logger.info('All Aggregation weights were set to 1')
+
+ return weights
+
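Since `calculate_aggregation_weights` is a classmethod operating on plain datasets, it can be exercised in isolation. A sketch with hypothetical data variables and attrs (all names illustrative):

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range('2025-01-01', periods=4, freq='h')
ds = xr.Dataset(
    {
        # Two members of the same group each get weight 1/2
        'demand_heat': xr.DataArray(np.ones(4), coords={'time': time}, dims='time', attrs={'aggregation_group': 'demand'}),
        'demand_power': xr.DataArray(np.ones(4), coords={'time': time}, dims='time', attrs={'aggregation_group': 'demand'}),
        # Explicit weight, no group
        'price_gas': xr.DataArray(np.ones(4), coords={'time': time}, dims='time', attrs={'aggregation_weight': 0.5}),
        # No attrs -> default weight 1
        'eta': xr.DataArray(np.ones(4), coords={'time': time}, dims='time'),
    }
)

weights = AggregatedCalculation.calculate_aggregation_weights(ds)
# {'demand_heat': 0.5, 'demand_power': 0.5, 'price_gas': 0.5, 'eta': 1}
```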
class SegmentedCalculation(Calculation):
"""Solve large optimization problems by dividing time horizon into (overlapping) segments.
@@ -423,20 +529,17 @@ def __init__(
self.nr_of_previous_values = nr_of_previous_values
self.sub_calculations: list[FullCalculation] = []
- self.all_timesteps = self.flow_system.time_series_collection.all_timesteps
- self.all_timesteps_extra = self.flow_system.time_series_collection.all_timesteps_extra
-
self.segment_names = [
f'Segment_{i + 1}' for i in range(math.ceil(len(self.all_timesteps) / self.timesteps_per_segment))
]
- self.active_timesteps_per_segment = self._calculate_timesteps_of_segment()
+ self._timesteps_per_segment = self._calculate_timesteps_per_segment()
assert timesteps_per_segment > 2, 'The Segment length must be greater 2, due to unwanted internal side effects'
assert self.timesteps_per_segment_with_overlap <= len(self.all_timesteps), (
f'{self.timesteps_per_segment_with_overlap=} cant be greater than the total length {len(self.all_timesteps)}'
)
- self.flow_system._connect_network() # Connect network to ensure that all FLows know their Component
+ self.flow_system._connect_network() # Connect network to ensure that all Flows know their Component
# Storing all original start values
self._original_start_values = {
**{flow.label_full: flow.previous_flow_rate for flow in self.flow_system.flows.values()},
@@ -448,104 +551,118 @@ def __init__(
}
self._transfered_start_values: list[dict[str, Any]] = []
- def do_modeling_and_solve(
- self, solver: _Solver, log_file: pathlib.Path | None = None, log_main_results: bool = False
- ):
- logger.info(f'{"":#^80}')
- logger.info(f'{" Segmented Solving ":#^80}')
-
+ def _create_sub_calculations(self):
for i, (segment_name, timesteps_of_segment) in enumerate(
- zip(self.segment_names, self.active_timesteps_per_segment, strict=False)
+ zip(self.segment_names, self._timesteps_per_segment, strict=True)
):
- if self.sub_calculations:
- self._transfer_start_values(i)
+ calc = FullCalculation(f'{self.name}-{segment_name}', self.flow_system.sel(time=timesteps_of_segment))
+            calc.flow_system._connect_network()  # Connect so all Flows know their correct names
+ self.sub_calculations.append(calc)
logger.info(
f'{segment_name} [{i + 1:>2}/{len(self.segment_names):<2}] '
f'({timesteps_of_segment[0]} -> {timesteps_of_segment[-1]}):'
)
- calculation = FullCalculation(
- f'{self.name}-{segment_name}', self.flow_system, active_timesteps=timesteps_of_segment
+ def do_modeling_and_solve(
+ self, solver: _Solver, log_file: pathlib.Path | None = None, log_main_results: bool = False
+ ) -> SegmentedCalculation:
+ logger.info(f'{"":#^80}')
+ logger.info(f'{" Segmented Solving ":#^80}')
+ self._create_sub_calculations()
+
+ for i, calculation in enumerate(self.sub_calculations):
+ logger.info(
+ f'{self.segment_names[i]} [{i + 1:>2}/{len(self.segment_names):<2}] '
+ f'({calculation.flow_system.timesteps[0]} -> {calculation.flow_system.timesteps[-1]}):'
)
- self.sub_calculations.append(calculation)
+
+ if i > 0 and self.nr_of_previous_values > 0:
+ self._transfer_start_values(i)
+
calculation.do_modeling()
- invest_elements = [
- model.label_full
- for component in self.flow_system.components.values()
- for model in component.model.all_sub_models
- if isinstance(model, InvestmentModel)
- ]
- if invest_elements:
- logger.critical(
- f'Investments are not supported in Segmented Calculation! '
- f'Following InvestmentModels were found: {invest_elements}'
- )
+
+            # Warn about Investments, but only in the first run
+ if i == 0:
+ invest_elements = [
+ model.label_full
+ for component in calculation.flow_system.components.values()
+ for model in component.submodel.all_submodels
+ if isinstance(model, InvestmentModel)
+ ]
+ if invest_elements:
+ logger.critical(
+ f'Investments are not supported in Segmented Calculation! '
+ f'Following InvestmentModels were found: {invest_elements}'
+ )
+
calculation.solve(
solver,
log_file=pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log',
log_main_results=log_main_results,
)
- self._reset_start_values()
-
for calc in self.sub_calculations:
for key, value in calc.durations.items():
self.durations[key] += value
self.results = SegmentedCalculationResults.from_calculation(self)
- def _transfer_start_values(self, segment_index: int):
+ return self
+
+ def _transfer_start_values(self, i: int):
"""
        This function gets the last values of the previously solved segment and
        inserts them as start values for the next segment
"""
- timesteps_of_prior_segment = self.active_timesteps_per_segment[segment_index - 1]
+ timesteps_of_prior_segment = self.sub_calculations[i - 1].flow_system.timesteps_extra
- start = self.active_timesteps_per_segment[segment_index][0]
+ start = self.sub_calculations[i].flow_system.timesteps[0]
start_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - self.nr_of_previous_values]
end_previous_values = timesteps_of_prior_segment[self.timesteps_per_segment - 1]
logger.debug(
- f'start of next segment: {start}. indices of previous values: {start_previous_values}:{end_previous_values}'
+ f'Start of next segment: {start}. Indices of previous values: {start_previous_values} -> {end_previous_values}'
)
+ current_flow_system = self.sub_calculations[i - 1].flow_system
+ next_flow_system = self.sub_calculations[i].flow_system
+
start_values_of_this_segment = {}
- for flow in self.flow_system.flows.values():
- flow.previous_flow_rate = flow.model.flow_rate.solution.sel(
+
+ for current_flow in current_flow_system.flows.values():
+ next_flow = next_flow_system.flows[current_flow.label_full]
+ next_flow.previous_flow_rate = current_flow.submodel.flow_rate.solution.sel(
time=slice(start_previous_values, end_previous_values)
).values
- start_values_of_this_segment[flow.label_full] = flow.previous_flow_rate
- for comp in self.flow_system.components.values():
- if isinstance(comp, Storage):
- comp.initial_charge_state = comp.model.charge_state.solution.sel(time=start).item()
- start_values_of_this_segment[comp.label_full] = comp.initial_charge_state
+ start_values_of_this_segment[current_flow.label_full] = next_flow.previous_flow_rate
- self._transfered_start_values.append(start_values_of_this_segment)
+ for current_comp in current_flow_system.components.values():
+ next_comp = next_flow_system.components[current_comp.label_full]
+ if isinstance(next_comp, Storage):
+ next_comp.initial_charge_state = current_comp.submodel.charge_state.solution.sel(time=start).item()
+ start_values_of_this_segment[current_comp.label_full] = next_comp.initial_charge_state
- def _reset_start_values(self):
- """This resets the start values of all Elements to its original state"""
- for flow in self.flow_system.flows.values():
- flow.previous_flow_rate = self._original_start_values[flow.label_full]
- for comp in self.flow_system.components.values():
- if isinstance(comp, Storage):
- comp.initial_charge_state = self._original_start_values[comp.label_full]
+ self._transfered_start_values.append(start_values_of_this_segment)
- def _calculate_timesteps_of_segment(self) -> list[pd.DatetimeIndex]:
- active_timesteps_per_segment = []
+ def _calculate_timesteps_per_segment(self) -> list[pd.DatetimeIndex]:
+ timesteps_per_segment = []
for i, _ in enumerate(self.segment_names):
start = self.timesteps_per_segment * i
end = min(start + self.timesteps_per_segment_with_overlap, len(self.all_timesteps))
- active_timesteps_per_segment.append(self.all_timesteps[start:end])
- return active_timesteps_per_segment
+ timesteps_per_segment.append(self.all_timesteps[start:end])
+ return timesteps_per_segment
@property
def timesteps_per_segment_with_overlap(self):
return self.timesteps_per_segment + self.overlap_timesteps
@property
- def start_values_of_segments(self) -> dict[int, dict[str, Any]]:
+ def start_values_of_segments(self) -> list[dict[str, Any]]:
"""Gives an overview of the start values of all Segments"""
- return {
- 0: {element.label_full: value for element, value in self._original_start_values.items()},
- **{i: start_values for i, start_values in enumerate(self._transfered_start_values, 1)},
- }
+        return [dict(self._original_start_values)] + list(self._transfered_start_values)
+
+ @property
+ def all_timesteps(self) -> pd.DatetimeIndex:
+ return self.flow_system.timesteps
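A sketch of the segmented workflow, assuming the constructor accepts the segment length and overlap in timesteps (the argument names follow the attributes used above and are not confirmed by this diff):

```python
seg = SegmentedCalculation(
    'yearly_dispatch',
    flow_system,
    timesteps_per_segment=168,  # one week per segment
    overlap_timesteps=24,       # one day of overlap to hand over storage states
)
seg.do_modeling_and_solve(solver)
results = seg.results  # SegmentedCalculationResults
```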
diff --git a/flixopt/components.py b/flixopt/components.py
index 2ad8d90e8..c40e6af88 100644
--- a/flixopt/components.py
+++ b/flixopt/components.py
@@ -9,13 +9,14 @@
from typing import TYPE_CHECKING, Literal
import numpy as np
+import xarray as xr
-from . import utils
-from .core import NumericData, NumericDataTS, PlausibilityError, Scalar, TimeSeries
+from .core import PeriodicDataUser, PlausibilityError, TemporalData, TemporalDataUser
from .elements import Component, ComponentModel, Flow
-from .features import InvestmentModel, OnOffModel, PiecewiseModel
+from .features import InvestmentModel, PiecewiseModel
from .interface import InvestParameters, OnOffParameters, PiecewiseConversion
-from .structure import SystemModel, register_class_for_io
+from .modeling import BoundingPatterns
+from .structure import FlowSystemModel, register_class_for_io
if TYPE_CHECKING:
import linopy
@@ -39,6 +40,10 @@ class LinearConverter(Component):
straightforward linear relationships, or piecewise conversion for complex non-linear
behavior approximated through piecewise linear segments.
+ Mathematical Formulation:
+ See the complete mathematical model in the documentation:
+ [LinearConverter](../user-guide/mathematical-notation/elements/LinearConverter.md)
+
Args:
label: The label of the Element. Used to identify it in the FlowSystem.
inputs: list of input Flows that feed into the converter.
@@ -141,11 +146,11 @@ class LinearConverter(Component):
Note:
Conversion factors define linear relationships where the sum of (coefficient × flow_rate)
equals zero for each equation: factor1×flow1 + factor2×flow2 + ... = 0
- Conversion factors define linear relationships.
- `{flow1: a1, flow2: a2, ...}` leads to `a1×flow_rate1 + a2×flow_rate2 + ... = 0`
- Unfortunately the current input format doest read intuitively:
- {"electricity": 1, "H2": 50} means that the electricity_in flow rate is multiplied by 1
- and the hydrogen_out flow rate is multiplied by 50. THis leads to 50 electricity --> 1 H2.
+ Conversion factors define linear relationships:
+ `{flow1: a1, flow2: a2, ...}` yields `a1×flow_rate1 + a2×flow_rate2 + ... = 0`.
+ Note: The input format may be unintuitive. For example,
+ `{"electricity": 1, "H2": 50}` implies `1×electricity = 50×H2`,
+ i.e., 50 units of electricity produce 1 unit of H2.
The system must have fewer conversion factors than total flows (degrees of freedom > 0)
to avoid over-constraining the problem. For n total flows, use at most n-1 conversion factors.
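A minimal sketch of the factor convention described above, assuming Flow objects `electricity_in` and `h2_out` labeled 'electricity' and 'H2' (all names illustrative):

```python
electrolyzer = LinearConverter(
    'electrolyzer',
    inputs=[electricity_in],
    outputs=[h2_out],
    # 1×electricity = 50×H2: 50 units of electricity yield 1 unit of H2
    conversion_factors=[{'electricity': 1, 'H2': 50}],
)
```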
@@ -161,7 +166,7 @@ def __init__(
inputs: list[Flow],
outputs: list[Flow],
on_off_parameters: OnOffParameters | None = None,
- conversion_factors: list[dict[str, NumericDataTS]] | None = None,
+ conversion_factors: list[dict[str, TemporalDataUser]] | None = None,
piecewise_conversion: PiecewiseConversion | None = None,
meta_data: dict | None = None,
):
@@ -169,10 +174,10 @@ def __init__(
self.conversion_factors = conversion_factors or []
self.piecewise_conversion = piecewise_conversion
- def create_model(self, model: SystemModel) -> LinearConverterModel:
+ def create_model(self, model: FlowSystemModel) -> LinearConverterModel:
self._plausibility_checks()
- self.model = LinearConverterModel(model, self)
- return self.model
+ self.submodel = LinearConverterModel(model, self)
+ return self.submodel
def _plausibility_checks(self) -> None:
super()._plausibility_checks()
@@ -198,26 +203,29 @@ def _plausibility_checks(self) -> None:
if self.piecewise_conversion:
for flow in self.flows.values():
if isinstance(flow.size, InvestParameters) and flow.size.fixed_size is None:
- raise PlausibilityError(
- f'piecewise_conversion (in {self.label_full}) and variable size '
- f'(in flow {flow.label_full}) do not make sense together!'
+ logger.warning(
+ f'Using a Flow with variable size (InvestParameters without fixed_size) '
+ f'and a piecewise_conversion in {self.label_full} is uncommon. Please verify intent '
+ f'({flow.label_full}).'
)
- def transform_data(self, flow_system: FlowSystem):
- super().transform_data(flow_system)
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
+ super().transform_data(flow_system, prefix)
if self.conversion_factors:
self.conversion_factors = self._transform_conversion_factors(flow_system)
if self.piecewise_conversion:
- self.piecewise_conversion.transform_data(flow_system, f'{self.label_full}|PiecewiseConversion')
+ self.piecewise_conversion.has_time_dim = True
+ self.piecewise_conversion.transform_data(flow_system, f'{prefix}|PiecewiseConversion')
- def _transform_conversion_factors(self, flow_system: FlowSystem) -> list[dict[str, TimeSeries]]:
- """macht alle Faktoren, die nicht TimeSeries sind, zu TimeSeries"""
+ def _transform_conversion_factors(self, flow_system: FlowSystem) -> list[dict[str, xr.DataArray]]:
+ """Converts all conversion factors to internal datatypes"""
list_of_conversion_factors = []
for idx, conversion_factor in enumerate(self.conversion_factors):
transformed_dict = {}
for flow, values in conversion_factor.items():
# TODO: Might be better to use the label of the component instead of the flow
- ts = flow_system.create_time_series(f'{self.flows[flow].label_full}|conversion_factor{idx}', values)
+ ts = flow_system.fit_to_model_coords(f'{self.flows[flow].label_full}|conversion_factor{idx}', values)
if ts is None:
raise PlausibilityError(f'{self.label_full}: conversion factor for flow "{flow}" must not be None')
transformed_dict[flow] = ts
@@ -243,37 +251,41 @@ class Storage(Component):
final state constraints, and time-varying parameters. It supports both fixed-size
and investment-optimized storage systems with comprehensive techno-economic modeling.
+ Mathematical Formulation:
+ See the complete mathematical model in the documentation:
+ [Storage](../user-guide/mathematical-notation/elements/Storage.md)
+
+ - Equation (1): Charge state bounds
+ - Equation (3): Storage balance (charge state evolution)
+
+ Variable Mapping:
+ - ``capacity_in_flow_hours`` → C (storage capacity)
+ - ``charge_state`` → c(t_i) (state of charge at time t_i)
+ - ``relative_loss_per_hour`` → ċ_rel,loss (self-discharge rate)
+ - ``eta_charge`` → η_in (charging efficiency)
+ - ``eta_discharge`` → η_out (discharging efficiency)
+
Args:
- label: The label of the Element. Used to identify it in the FlowSystem.
- charging: Incoming flow for loading the storage. Represents energy or material
- flowing into the storage system.
- discharging: Outgoing flow for unloading the storage. Represents energy or
- material flowing out of the storage system.
- capacity_in_flow_hours: Nominal capacity/size of the storage in flow-hours
- (e.g., kWh for electrical storage, m³ or kg for material storage). Can be a scalar
- for fixed capacity or InvestParameters for optimization.
- relative_minimum_charge_state: Minimum relative charge state (0-1 range).
- Prevents deep discharge that could damage equipment. Default is 0.
- relative_maximum_charge_state: Maximum relative charge state (0-1 range).
- Accounts for practical capacity limits, safety margins or temperature impacts. Default is 1.
- initial_charge_state: Storage charge state at the beginning of the time horizon.
- Can be numeric value or 'lastValueOfSim', which is recommended for if the initial start state is not known.
- Default is 0.
- minimal_final_charge_state: Minimum absolute charge state required at the end
- of the time horizon. Useful for ensuring energy security or meeting contracts.
- maximal_final_charge_state: Maximum absolute charge state allowed at the end
- of the time horizon. Useful for preventing overcharge or managing inventory.
- eta_charge: Charging efficiency factor (0-1 range). Accounts for conversion
- losses during charging. Default is 1 (perfect efficiency).
- eta_discharge: Discharging efficiency factor (0-1 range). Accounts for
- conversion losses during discharging. Default is 1 (perfect efficiency).
- relative_loss_per_hour: Self-discharge rate per hour (typically 0-0.1 range).
- Represents standby losses, leakage, or degradation. Default is 0.
- prevent_simultaneous_charge_and_discharge: If True, prevents charging and
- discharging simultaneously. Increases binary variables but improves model
- realism and solution interpretation. Default is True.
- meta_data: Used to store additional information about the Element. Not used
- internally, but saved in results. Only use Python native types.
+ label: Element identifier used in the FlowSystem.
+ charging: Incoming flow for loading the storage.
+ discharging: Outgoing flow for unloading the storage.
+ capacity_in_flow_hours: Storage capacity in flow-hours (kWh, m³, kg).
+ Scalar for fixed size or InvestParameters for optimization.
+ relative_minimum_charge_state: Minimum charge state (0-1). Default: 0.
+ relative_maximum_charge_state: Maximum charge state (0-1). Default: 1.
+ initial_charge_state: Charge at start. Numeric or 'lastValueOfSim'. Default: 0.
+ minimal_final_charge_state: Minimum absolute charge required at end (optional).
+ maximal_final_charge_state: Maximum absolute charge allowed at end (optional).
+ relative_minimum_final_charge_state: Minimum relative charge at end.
+ Defaults to last value of relative_minimum_charge_state.
+ relative_maximum_final_charge_state: Maximum relative charge at end.
+ Defaults to last value of relative_maximum_charge_state.
+ eta_charge: Charging efficiency (0-1). Default: 1.
+ eta_discharge: Discharging efficiency (0-1). Default: 1.
+ relative_loss_per_hour: Self-discharge per hour (0-0.1). Default: 0.
+ prevent_simultaneous_charge_and_discharge: Prevent charging and discharging
+ simultaneously. Adds binary variables. Default: True.
+ meta_data: Additional information stored in results. Python native types only.
Examples:
Battery energy storage system:
@@ -349,20 +361,19 @@ class Storage(Component):
```
Note:
- Charge state evolution follows the equation:
- charge[t+1] = charge[t] × (1-loss_rate)^hours_per_step +
- charge_flow[t] × eta_charge × hours_per_step -
- discharge_flow[t] × hours_per_step / eta_discharge
+ **Mathematical formulation**: See [Storage](../user-guide/mathematical-notation/elements/Storage.md)
+ for charge state evolution equations and balance constraints.
- All efficiency parameters (eta_charge, eta_discharge) are dimensionless (0-1 range).
- The relative_loss_per_hour parameter represents exponential decay per hour.
+ **Efficiency parameters** (eta_charge, eta_discharge) are dimensionless (0-1 range).
+ The relative_loss_per_hour represents exponential decay per hour.
- When prevent_simultaneous_charge_and_discharge is True, binary variables are
- created to enforce mutual exclusivity, which increases solution time but
- prevents unrealistic simultaneous charging and discharging.
+ **Binary variables**: When prevent_simultaneous_charge_and_discharge is True, binary
+ variables enforce mutual exclusivity, increasing solution time but preventing unrealistic
+ simultaneous charging and discharging.
- Initial and final charge state constraints use absolute values (not relative),
- matching the capacity_in_flow_hours units.
+        **Units**: Flow rates and charge states are related through 'flow hours' (flow_rate × time).
+        With flow rates in kW, the charge state is in kWh;
+        with flow rates in m³/h, it is in m³.
"""
def __init__(
@@ -370,16 +381,19 @@ def __init__(
label: str,
charging: Flow,
discharging: Flow,
- capacity_in_flow_hours: Scalar | InvestParameters,
- relative_minimum_charge_state: NumericData = 0,
- relative_maximum_charge_state: NumericData = 1,
- initial_charge_state: Scalar | Literal['lastValueOfSim'] = 0,
- minimal_final_charge_state: Scalar | None = None,
- maximal_final_charge_state: Scalar | None = None,
- eta_charge: NumericData = 1,
- eta_discharge: NumericData = 1,
- relative_loss_per_hour: NumericData = 0,
+ capacity_in_flow_hours: PeriodicDataUser | InvestParameters,
+ relative_minimum_charge_state: TemporalDataUser = 0,
+ relative_maximum_charge_state: TemporalDataUser = 1,
+ initial_charge_state: PeriodicDataUser | Literal['lastValueOfSim'] = 0,
+ minimal_final_charge_state: PeriodicDataUser | None = None,
+ maximal_final_charge_state: PeriodicDataUser | None = None,
+ relative_minimum_final_charge_state: PeriodicDataUser | None = None,
+ relative_maximum_final_charge_state: PeriodicDataUser | None = None,
+ eta_charge: TemporalDataUser = 1,
+ eta_discharge: TemporalDataUser = 1,
+ relative_loss_per_hour: TemporalDataUser = 0,
prevent_simultaneous_charge_and_discharge: bool = True,
+ balanced: bool = False,
meta_data: dict | None = None,
):
        # TODO: implement fixed_relative_chargeState
@@ -394,72 +408,125 @@ def __init__(
self.charging = charging
self.discharging = discharging
self.capacity_in_flow_hours = capacity_in_flow_hours
- self.relative_minimum_charge_state: NumericDataTS = relative_minimum_charge_state
- self.relative_maximum_charge_state: NumericDataTS = relative_maximum_charge_state
+ self.relative_minimum_charge_state: TemporalDataUser = relative_minimum_charge_state
+ self.relative_maximum_charge_state: TemporalDataUser = relative_maximum_charge_state
+
+ self.relative_minimum_final_charge_state = relative_minimum_final_charge_state
+ self.relative_maximum_final_charge_state = relative_maximum_final_charge_state
self.initial_charge_state = initial_charge_state
self.minimal_final_charge_state = minimal_final_charge_state
self.maximal_final_charge_state = maximal_final_charge_state
- self.eta_charge: NumericDataTS = eta_charge
- self.eta_discharge: NumericDataTS = eta_discharge
- self.relative_loss_per_hour: NumericDataTS = relative_loss_per_hour
+ self.eta_charge: TemporalDataUser = eta_charge
+ self.eta_discharge: TemporalDataUser = eta_discharge
+ self.relative_loss_per_hour: TemporalDataUser = relative_loss_per_hour
self.prevent_simultaneous_charge_and_discharge = prevent_simultaneous_charge_and_discharge
+ self.balanced = balanced
- def create_model(self, model: SystemModel) -> StorageModel:
+ def create_model(self, model: FlowSystemModel) -> StorageModel:
self._plausibility_checks()
- self.model = StorageModel(model, self)
- return self.model
-
- def transform_data(self, flow_system: FlowSystem) -> None:
- super().transform_data(flow_system)
- self.relative_minimum_charge_state = flow_system.create_time_series(
- f'{self.label_full}|relative_minimum_charge_state',
+ self.submodel = StorageModel(model, self)
+ return self.submodel
+
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
+ super().transform_data(flow_system, prefix)
+ self.relative_minimum_charge_state = flow_system.fit_to_model_coords(
+ f'{prefix}|relative_minimum_charge_state',
self.relative_minimum_charge_state,
- needs_extra_timestep=True,
)
- self.relative_maximum_charge_state = flow_system.create_time_series(
- f'{self.label_full}|relative_maximum_charge_state',
+ self.relative_maximum_charge_state = flow_system.fit_to_model_coords(
+ f'{prefix}|relative_maximum_charge_state',
self.relative_maximum_charge_state,
- needs_extra_timestep=True,
)
- self.eta_charge = flow_system.create_time_series(f'{self.label_full}|eta_charge', self.eta_charge)
- self.eta_discharge = flow_system.create_time_series(f'{self.label_full}|eta_discharge', self.eta_discharge)
- self.relative_loss_per_hour = flow_system.create_time_series(
- f'{self.label_full}|relative_loss_per_hour', self.relative_loss_per_hour
+ self.eta_charge = flow_system.fit_to_model_coords(f'{prefix}|eta_charge', self.eta_charge)
+ self.eta_discharge = flow_system.fit_to_model_coords(f'{prefix}|eta_discharge', self.eta_discharge)
+ self.relative_loss_per_hour = flow_system.fit_to_model_coords(
+ f'{prefix}|relative_loss_per_hour', self.relative_loss_per_hour
+ )
+ if not isinstance(self.initial_charge_state, str):
+ self.initial_charge_state = flow_system.fit_to_model_coords(
+ f'{prefix}|initial_charge_state', self.initial_charge_state, dims=['period', 'scenario']
+ )
+ self.minimal_final_charge_state = flow_system.fit_to_model_coords(
+ f'{prefix}|minimal_final_charge_state', self.minimal_final_charge_state, dims=['period', 'scenario']
+ )
+ self.maximal_final_charge_state = flow_system.fit_to_model_coords(
+ f'{prefix}|maximal_final_charge_state', self.maximal_final_charge_state, dims=['period', 'scenario']
+ )
+ self.relative_minimum_final_charge_state = flow_system.fit_to_model_coords(
+ f'{prefix}|relative_minimum_final_charge_state',
+ self.relative_minimum_final_charge_state,
+ dims=['period', 'scenario'],
+ )
+ self.relative_maximum_final_charge_state = flow_system.fit_to_model_coords(
+ f'{prefix}|relative_maximum_final_charge_state',
+ self.relative_maximum_final_charge_state,
+ dims=['period', 'scenario'],
)
if isinstance(self.capacity_in_flow_hours, InvestParameters):
- self.capacity_in_flow_hours.transform_data(flow_system)
+ self.capacity_in_flow_hours.transform_data(flow_system, f'{prefix}|InvestParameters')
+ else:
+ self.capacity_in_flow_hours = flow_system.fit_to_model_coords(
+ f'{prefix}|capacity_in_flow_hours', self.capacity_in_flow_hours, dims=['period', 'scenario']
+ )
def _plausibility_checks(self) -> None:
"""
Check for infeasible or uncommon combinations of parameters
"""
super()._plausibility_checks()
- if utils.is_number(self.initial_charge_state):
- if isinstance(self.capacity_in_flow_hours, InvestParameters):
- if self.capacity_in_flow_hours.fixed_size is None:
- maximum_capacity = self.capacity_in_flow_hours.maximum_size
- minimum_capacity = self.capacity_in_flow_hours.minimum_size
- else:
- maximum_capacity = self.capacity_in_flow_hours.fixed_size
- minimum_capacity = self.capacity_in_flow_hours.fixed_size
+
+ # Validate string values and set flag
+ initial_is_last = False
+ if isinstance(self.initial_charge_state, str):
+ if self.initial_charge_state == 'lastValueOfSim':
+ initial_is_last = True
else:
- maximum_capacity = self.capacity_in_flow_hours
- minimum_capacity = self.capacity_in_flow_hours
+ raise PlausibilityError(f'initial_charge_state has undefined value: {self.initial_charge_state}')
- minimum_initial_capacity = maximum_capacity * self.relative_minimum_charge_state.isel(time=1)
- maximum_initial_capacity = minimum_capacity * self.relative_maximum_charge_state.isel(time=1)
- if self.initial_charge_state > maximum_initial_capacity:
- raise ValueError(
- f'{self.label_full}: {self.initial_charge_state=} is above allowed maximum {maximum_initial_capacity}'
+ # Use new InvestParameters methods to get capacity bounds
+ if isinstance(self.capacity_in_flow_hours, InvestParameters):
+ minimum_capacity = self.capacity_in_flow_hours.minimum_or_fixed_size
+ maximum_capacity = self.capacity_in_flow_hours.maximum_or_fixed_size
+ else:
+ maximum_capacity = self.capacity_in_flow_hours
+ minimum_capacity = self.capacity_in_flow_hours
+
+        # The initial charge state should not constrain the investment decision
+ minimum_initial_capacity = maximum_capacity * self.relative_minimum_charge_state.isel(time=0)
+ maximum_initial_capacity = minimum_capacity * self.relative_maximum_charge_state.isel(time=0)
+
+ # Only perform numeric comparisons if not using 'lastValueOfSim'
+ if not initial_is_last:
+ if (self.initial_charge_state > maximum_initial_capacity).any():
+ raise PlausibilityError(
+ f'{self.label_full}: {self.initial_charge_state=} '
+                    f'is constraining the investment decision. Choose a value below {maximum_initial_capacity}'
)
- if self.initial_charge_state < minimum_initial_capacity:
- raise ValueError(
- f'{self.label_full}: {self.initial_charge_state=} is below allowed minimum {minimum_initial_capacity}'
+ if (self.initial_charge_state < minimum_initial_capacity).any():
+ raise PlausibilityError(
+ f'{self.label_full}: {self.initial_charge_state=} '
+                    f'is constraining the investment decision. Choose a value above {minimum_initial_capacity}'
+ )
+
+ if self.balanced:
+ if not isinstance(self.charging.size, InvestParameters) or not isinstance(
+ self.discharging.size, InvestParameters
+ ):
+ raise PlausibilityError(
+ f'Balancing charging and discharging Flows in {self.label_full} is only possible with Investments.'
+ )
+
+ if (self.charging.size.minimum_size > self.discharging.size.maximum_size).any() or (
+ self.charging.size.maximum_size < self.discharging.size.minimum_size
+ ).any():
+ raise PlausibilityError(
+                    f'Balancing charging and discharging Flows in {self.label_full} requires compatible minimum and maximum sizes. '
+ f'Got: {self.charging.size.minimum_size=}, {self.charging.size.maximum_size=} and '
+ f'{self.discharging.size.minimum_size=}, {self.discharging.size.maximum_size=}.'
)
- elif self.initial_charge_state != 'lastValueOfSim':
- raise ValueError(f'{self.label_full}: {self.initial_charge_state=} has an invalid value')
@register_class_for_io
@@ -491,6 +558,7 @@ class Transmission(Component):
prevent_simultaneous_flows_in_both_directions: If True, prevents simultaneous
flow in both directions. Increases binary variables but reflects physical
reality for most transmission systems. Default is True.
+        balanced: Whether to equate the sizes of the in1 and in2 Flows. Needs InvestParameters in both Flows.
meta_data: Used to store additional information. Not used internally but saved
in results. Only use Python native types.
@@ -579,10 +647,11 @@ def __init__(
out1: Flow,
in2: Flow | None = None,
out2: Flow | None = None,
- relative_losses: NumericDataTS = 0,
- absolute_losses: NumericDataTS | None = None,
- on_off_parameters: OnOffParameters | None = None,
+ relative_losses: TemporalDataUser | None = None,
+ absolute_losses: TemporalDataUser | None = None,
+        on_off_parameters: OnOffParameters | None = None,
prevent_simultaneous_flows_in_both_directions: bool = True,
+ balanced: bool = False,
meta_data: dict | None = None,
):
super().__init__(
@@ -602,6 +671,7 @@ def __init__(
self.relative_losses = relative_losses
self.absolute_losses = absolute_losses
+ self.balanced = balanced
def _plausibility_checks(self):
super()._plausibility_checks()
@@ -614,51 +684,47 @@ def _plausibility_checks(self):
assert self.out2.bus == self.in1.bus, (
f'Input 1 and Output 2 do not start/end at the same Bus: {self.in1.bus=}, {self.out2.bus=}'
)
- # Check Investments
- for flow in [self.out1, self.in2, self.out2]:
- if flow is not None and isinstance(flow.size, InvestParameters):
+
+ if self.balanced:
+ if self.in2 is None:
+                raise ValueError('Balanced Transmission needs a second in-Flow (in2)')
+ if not isinstance(self.in1.size, InvestParameters) or not isinstance(self.in2.size, InvestParameters):
+ raise ValueError('Balanced Transmission needs InvestParameters in both in-Flows')
+ if (self.in1.size.minimum_or_fixed_size > self.in2.size.maximum_or_fixed_size).any() or (
+ self.in1.size.maximum_or_fixed_size < self.in2.size.minimum_or_fixed_size
+ ).any():
raise ValueError(
- 'Transmission currently does not support separate InvestParameters for Flows. '
- 'Please use Flow in1. The size of in2 is equal to in1. THis is handled internally'
+                    f'Balanced Transmission needs compatible minimum and maximum sizes. '
+ f'Got: {self.in1.size.minimum_size=}, {self.in1.size.maximum_size=}, {self.in1.size.fixed_size=} and '
+ f'{self.in2.size.minimum_size=}, {self.in2.size.maximum_size=}, {self.in2.size.fixed_size=}.'
)
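A sketch of a balanced bidirectional line, assuming in1/in2 Flows with compatible InvestParameters (all names illustrative):

```python
line = Transmission(
    'dc_link',
    in1=flow_a_to_b_in,
    out1=flow_a_to_b_out,
    in2=flow_b_to_a_in,
    out2=flow_b_to_a_out,
    relative_losses=0.02,  # 2% of the transported flow is lost
    balanced=True,         # equates the sizes of in1 and in2
)
```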
def create_model(self, model) -> TransmissionModel:
self._plausibility_checks()
- self.model = TransmissionModel(model, self)
- return self.model
+ self.submodel = TransmissionModel(model, self)
+ return self.submodel
- def transform_data(self, flow_system: FlowSystem) -> None:
- super().transform_data(flow_system)
- self.relative_losses = flow_system.create_time_series(
- f'{self.label_full}|relative_losses', self.relative_losses
- )
- self.absolute_losses = flow_system.create_time_series(
- f'{self.label_full}|absolute_losses', self.absolute_losses
- )
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
+ super().transform_data(flow_system, prefix)
+ self.relative_losses = flow_system.fit_to_model_coords(f'{prefix}|relative_losses', self.relative_losses)
+ self.absolute_losses = flow_system.fit_to_model_coords(f'{prefix}|absolute_losses', self.absolute_losses)
class TransmissionModel(ComponentModel):
- def __init__(self, model: SystemModel, element: Transmission):
- super().__init__(model, element)
- self.element: Transmission = element
- self.on_off: OnOffModel | None = None
+ element: Transmission
- def do_modeling(self):
- """Initiates all FlowModels"""
- # Force On Variable if absolute losses are present
- if (self.element.absolute_losses is not None) and np.any(self.element.absolute_losses.active_data != 0):
- for flow in self.element.inputs + self.element.outputs:
+ def __init__(self, model: FlowSystemModel, element: Transmission):
+ if (element.absolute_losses is not None) and np.any(element.absolute_losses != 0):
+ for flow in element.inputs + element.outputs:
if flow.on_off_parameters is None:
flow.on_off_parameters = OnOffParameters()
- # Make sure either None or both in Flows have InvestParameters
- if self.element.in2 is not None:
- if isinstance(self.element.in1.size, InvestParameters) and not isinstance(
- self.element.in2.size, InvestParameters
- ):
- self.element.in2.size = InvestParameters(maximum_size=self.element.in1.size.maximum_size)
+ super().__init__(model, element)
- super().do_modeling()
+ def _do_modeling(self):
+ """Initiates all FlowModels"""
+ super()._do_modeling()
# first direction
self.create_transmission_equation('dir1', self.element.in1, self.element.out1)
@@ -668,43 +734,37 @@ def do_modeling(self):
self.create_transmission_equation('dir2', self.element.in2, self.element.out2)
# equate size of both directions
- if isinstance(self.element.in1.size, InvestParameters) and self.element.in2 is not None:
+ if self.element.balanced:
# eq: in1.size = in2.size
- self.add(
- self._model.add_constraints(
- self.element.in1.model._investment.size == self.element.in2.model._investment.size,
- name=f'{self.label_full}|same_size',
- ),
- 'same_size',
+ self.add_constraints(
+ self.element.in1.submodel._investment.size == self.element.in2.submodel._investment.size,
+ short_name='same_size',
)
def create_transmission_equation(self, name: str, in_flow: Flow, out_flow: Flow) -> linopy.Constraint:
"""Creates an Equation for the Transmission efficiency and adds it to the model"""
# eq: out(t) + on(t)*loss_abs(t) = in(t)*(1 - loss_rel(t))
- con_transmission = self.add(
- self._model.add_constraints(
- out_flow.model.flow_rate == -in_flow.model.flow_rate * (self.element.relative_losses.active_data - 1),
- name=f'{self.label_full}|{name}',
- ),
- name,
+ rel_losses = 0 if self.element.relative_losses is None else self.element.relative_losses
+ con_transmission = self.add_constraints(
+ out_flow.submodel.flow_rate == in_flow.submodel.flow_rate * (1 - rel_losses),
+ short_name=name,
)
if self.element.absolute_losses is not None:
- con_transmission.lhs += in_flow.model.on_off.on * self.element.absolute_losses.active_data
+ con_transmission.lhs += in_flow.submodel.on_off.on * self.element.absolute_losses
return con_transmission
class LinearConverterModel(ComponentModel):
- def __init__(self, model: SystemModel, element: LinearConverter):
- super().__init__(model, element)
- self.element: LinearConverter = element
- self.on_off: OnOffModel | None = None
- self.piecewise_conversion: PiecewiseModel | None = None
+ element: LinearConverter
- def do_modeling(self):
- super().do_modeling()
+ def __init__(self, model: FlowSystemModel, element: LinearConverter):
+ self.piecewise_conversion: PiecewiseConversion | None = None
+ super().__init__(model, element)
+ def _do_modeling(self):
+ super()._do_modeling()
# conversion_factors:
if self.element.conversion_factors:
all_input_flows = set(self.element.inputs)
@@ -716,146 +776,132 @@ def do_modeling(self):
used_inputs: set[Flow] = all_input_flows & used_flows
used_outputs: set[Flow] = all_output_flows & used_flows
- self.add(
- self._model.add_constraints(
- sum([flow.model.flow_rate * conv_factors[flow.label].active_data for flow in used_inputs])
- == sum([flow.model.flow_rate * conv_factors[flow.label].active_data for flow in used_outputs]),
- name=f'{self.label_full}|conversion_{i}',
- )
+ self.add_constraints(
+ sum([flow.submodel.flow_rate * conv_factors[flow.label] for flow in used_inputs])
+ == sum([flow.submodel.flow_rate * conv_factors[flow.label] for flow in used_outputs]),
+ short_name=f'conversion_{i}',
)
else:
# TODO: Improve Inclusion of OnOffParameters. Instead of creating a Binary in every flow, the binary could only be part of the Piece itself
piecewise_conversion = {
- self.element.flows[flow].model.flow_rate.name: piecewise
+ self.element.flows[flow].submodel.flow_rate.name: piecewise
for flow, piecewise in self.element.piecewise_conversion.items()
}
- self.piecewise_conversion = self.add(
+ self.piecewise_conversion = self.add_submodels(
PiecewiseModel(
model=self._model,
label_of_element=self.label_of_element,
+                label_of_model=self.label_of_element,
piecewise_variables=piecewise_conversion,
zero_point=self.on_off.on if self.on_off is not None else False,
- as_time_series=True,
- )
+ dims=('time', 'period', 'scenario'),
+ ),
+ short_name='PiecewiseConversion',
)
- self.piecewise_conversion.do_modeling()
class StorageModel(ComponentModel):
- """Model of Storage"""
+ """Submodel of Storage"""
+
+ element: Storage
- def __init__(self, model: SystemModel, element: Storage):
+ def __init__(self, model: FlowSystemModel, element: Storage):
super().__init__(model, element)
- self.element: Storage = element
- self.charge_state: linopy.Variable | None = None
- self.netto_discharge: linopy.Variable | None = None
- self._investment: InvestmentModel | None = None
-
- def do_modeling(self):
- super().do_modeling()
-
- lb, ub = self.absolute_charge_state_bounds
- self.charge_state = self.add(
- self._model.add_variables(
- lower=lb, upper=ub, coords=self._model.coords_extra, name=f'{self.label_full}|charge_state'
- ),
- 'charge_state',
- )
- self.netto_discharge = self.add(
- self._model.add_variables(coords=self._model.coords, name=f'{self.label_full}|netto_discharge'),
- 'netto_discharge',
+
+ def _do_modeling(self):
+ super()._do_modeling()
+
+ lb, ub = self._absolute_charge_state_bounds
+ self.add_variables(
+ lower=lb,
+ upper=ub,
+ coords=self._model.get_coords(extra_timestep=True),
+ short_name='charge_state',
)
+
+ self.add_variables(coords=self._model.get_coords(), short_name='netto_discharge')
+
# netto_discharge:
# eq: nettoFlow(t) - discharging(t) + charging(t) = 0
- self.add(
- self._model.add_constraints(
- self.netto_discharge
- == self.element.discharging.model.flow_rate - self.element.charging.model.flow_rate,
- name=f'{self.label_full}|netto_discharge',
- ),
- 'netto_discharge',
+ self.add_constraints(
+ self.netto_discharge
+ == self.element.discharging.submodel.flow_rate - self.element.charging.submodel.flow_rate,
+ short_name='netto_discharge',
)
charge_state = self.charge_state
- rel_loss = self.element.relative_loss_per_hour.active_data
+ rel_loss = self.element.relative_loss_per_hour
hours_per_step = self._model.hours_per_step
- charge_rate = self.element.charging.model.flow_rate
- discharge_rate = self.element.discharging.model.flow_rate
- eff_charge = self.element.eta_charge.active_data
- eff_discharge = self.element.eta_discharge.active_data
-
- self.add(
- self._model.add_constraints(
- charge_state.isel(time=slice(1, None))
- == charge_state.isel(time=slice(None, -1)) * ((1 - rel_loss) ** hours_per_step)
- + charge_rate * eff_charge * hours_per_step
- - discharge_rate * hours_per_step / eff_discharge,
- name=f'{self.label_full}|charge_state',
- ),
- 'charge_state',
+ charge_rate = self.element.charging.submodel.flow_rate
+ discharge_rate = self.element.discharging.submodel.flow_rate
+ eff_charge = self.element.eta_charge
+ eff_discharge = self.element.eta_discharge
+
+ self.add_constraints(
+ charge_state.isel(time=slice(1, None))
+ == charge_state.isel(time=slice(None, -1)) * ((1 - rel_loss) ** hours_per_step)
+ + charge_rate * eff_charge * hours_per_step
+ - discharge_rate * hours_per_step / eff_discharge,
+ short_name='charge_state',
)
if isinstance(self.element.capacity_in_flow_hours, InvestParameters):
- self._investment = InvestmentModel(
- model=self._model,
- label_of_element=self.label_of_element,
- parameters=self.element.capacity_in_flow_hours,
- defining_variable=self.charge_state,
- relative_bounds_of_defining_variable=self.relative_charge_state_bounds,
+ self.add_submodels(
+ InvestmentModel(
+ model=self._model,
+ label_of_element=self.label_of_element,
+ label_of_model=self.label_of_element,
+ parameters=self.element.capacity_in_flow_hours,
+ ),
+ short_name='investment',
+ )
+
+ BoundingPatterns.scaled_bounds(
+ self,
+ variable=self.charge_state,
+ scaling_variable=self.investment.size,
+ relative_bounds=self._relative_charge_state_bounds,
)
- self.sub_models.append(self._investment)
- self._investment.do_modeling()
# Initial charge state
self._initial_and_final_charge_state()
+ if self.element.balanced:
+ self.add_constraints(
+ self.element.charging.submodel._investment.size * 1
+ == self.element.discharging.submodel._investment.size * 1,
+ short_name='balanced_sizes',
+ )
+
def _initial_and_final_charge_state(self):
if self.element.initial_charge_state is not None:
- name_short = 'initial_charge_state'
- name = f'{self.label_full}|{name_short}'
-
- if utils.is_number(self.element.initial_charge_state):
- self.add(
- self._model.add_constraints(
- self.charge_state.isel(time=0) == self.element.initial_charge_state, name=name
- ),
- name_short,
- )
- elif self.element.initial_charge_state == 'lastValueOfSim':
- self.add(
- self._model.add_constraints(
- self.charge_state.isel(time=0) == self.charge_state.isel(time=-1), name=name
- ),
- name_short,
+ if isinstance(self.element.initial_charge_state, str):
+ self.add_constraints(
+ self.charge_state.isel(time=0) == self.charge_state.isel(time=-1), short_name='initial_charge_state'
)
- else: # TODO: Validation in Storage Class, not in Model
- raise PlausibilityError(
- f'initial_charge_state has undefined value: {self.element.initial_charge_state}'
+ else:
+ self.add_constraints(
+ self.charge_state.isel(time=0) == self.element.initial_charge_state,
+ short_name='initial_charge_state',
)
if self.element.maximal_final_charge_state is not None:
- self.add(
- self._model.add_constraints(
- self.charge_state.isel(time=-1) <= self.element.maximal_final_charge_state,
- name=f'{self.label_full}|final_charge_max',
- ),
- 'final_charge_max',
+ self.add_constraints(
+ self.charge_state.isel(time=-1) <= self.element.maximal_final_charge_state,
+ short_name='final_charge_max',
)
if self.element.minimal_final_charge_state is not None:
- self.add(
- self._model.add_constraints(
- self.charge_state.isel(time=-1) >= self.element.minimal_final_charge_state,
- name=f'{self.label_full}|final_charge_min',
- ),
- 'final_charge_min',
+ self.add_constraints(
+ self.charge_state.isel(time=-1) >= self.element.minimal_final_charge_state,
+ short_name='final_charge_min',
)
@property
- def absolute_charge_state_bounds(self) -> tuple[NumericData, NumericData]:
- relative_lower_bound, relative_upper_bound = self.relative_charge_state_bounds
+ def _absolute_charge_state_bounds(self) -> tuple[TemporalData, TemporalData]:
+ relative_lower_bound, relative_upper_bound = self._relative_charge_state_bounds
if not isinstance(self.element.capacity_in_flow_hours, InvestParameters):
return (
relative_lower_bound * self.element.capacity_in_flow_hours,
@@ -868,11 +914,55 @@ def absolute_charge_state_bounds(self) -> tuple[NumericData, NumericData]:
)
@property
- def relative_charge_state_bounds(self) -> tuple[NumericData, NumericData]:
- return (
- self.element.relative_minimum_charge_state.active_data,
- self.element.relative_maximum_charge_state.active_data,
- )
+ def _relative_charge_state_bounds(self) -> tuple[xr.DataArray, xr.DataArray]:
+ """
+ Get relative charge state bounds with final timestep values.
+
+ Returns:
+ Tuple of (minimum_bounds, maximum_bounds) DataArrays extending to final timestep
+ """
+ final_coords = {'time': [self._model.flow_system.timesteps_extra[-1]]}
+
+ # Get final minimum charge state
+ if self.element.relative_minimum_final_charge_state is None:
+ min_final = self.element.relative_minimum_charge_state.isel(time=-1, drop=True)
+ else:
+ min_final = self.element.relative_minimum_final_charge_state
+ min_final = min_final.expand_dims('time').assign_coords(time=final_coords['time'])
+
+ # Get final maximum charge state
+ if self.element.relative_maximum_final_charge_state is None:
+ max_final = self.element.relative_maximum_charge_state.isel(time=-1, drop=True)
+ else:
+ max_final = self.element.relative_maximum_final_charge_state
+ max_final = max_final.expand_dims('time').assign_coords(time=final_coords['time'])
+ # Concatenate with original bounds
+ min_bounds = xr.concat([self.element.relative_minimum_charge_state, min_final], dim='time')
+ max_bounds = xr.concat([self.element.relative_maximum_charge_state, max_final], dim='time')
+
+ return min_bounds, max_bounds
+
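The same extend-to-final-timestep pattern in isolation, as a sketch with a bound that lacks the extra timestep and a known final timestamp (values illustrative):

```python
import pandas as pd
import xarray as xr

time = pd.date_range('2025-01-01', periods=3, freq='h')
final = pd.Timestamp('2025-01-01 03:00')

rel_max = xr.DataArray([1.0, 1.0, 0.8], coords={'time': time}, dims='time')

# Repeat the last value at the extra (final) timestep and append it
last = rel_max.isel(time=-1, drop=True).expand_dims('time').assign_coords(time=[final])
rel_max_extended = xr.concat([rel_max, last], dim='time')  # length 4, ends with 0.8
```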
+ @property
+ def _investment(self) -> InvestmentModel | None:
+ """Deprecated alias for investment"""
+ return self.investment
+
+ @property
+ def investment(self) -> InvestmentModel | None:
+        """Investment feature"""
+ if 'investment' not in self.submodels:
+ return None
+ return self.submodels['investment']
+
+ @property
+ def charge_state(self) -> linopy.Variable:
+ """Charge state variable"""
+ return self['charge_state']
+
+ @property
+ def netto_discharge(self) -> linopy.Variable:
+ """Netto discharge variable"""
+ return self['netto_discharge']
@register_class_for_io
@@ -970,36 +1060,19 @@ def __init__(
meta_data: dict | None = None,
**kwargs,
):
- source = kwargs.pop('source', None)
- sink = kwargs.pop('sink', None)
- prevent_simultaneous_sink_and_source = kwargs.pop('prevent_simultaneous_sink_and_source', None)
- if source is not None:
- warnings.warn(
- 'The use of the source argument is deprecated. Use the outputs argument instead.',
- DeprecationWarning,
- stacklevel=2,
- )
- if outputs is not None:
- raise ValueError('Either source or outputs can be specified, but not both.')
- outputs = [source]
-
- if sink is not None:
- warnings.warn(
- 'The use of the sink argument is deprecated. Use the inputs argument instead.',
- DeprecationWarning,
- stacklevel=2,
- )
- if inputs is not None:
- raise ValueError('Either sink or inputs can be specified, but not both.')
- inputs = [sink]
-
- if prevent_simultaneous_sink_and_source is not None:
- warnings.warn(
- 'The use of the prevent_simultaneous_sink_and_source argument is deprecated. Use the prevent_simultaneous_flow_rates argument instead.',
- DeprecationWarning,
- stacklevel=2,
- )
- prevent_simultaneous_flow_rates = prevent_simultaneous_sink_and_source
+ # Handle deprecated parameters using centralized helper
+ outputs = self._handle_deprecated_kwarg(kwargs, 'source', 'outputs', outputs, transform=lambda x: [x])
+ inputs = self._handle_deprecated_kwarg(kwargs, 'sink', 'inputs', inputs, transform=lambda x: [x])
+ prevent_simultaneous_flow_rates = self._handle_deprecated_kwarg(
+ kwargs,
+ 'prevent_simultaneous_sink_and_source',
+ 'prevent_simultaneous_flow_rates',
+ prevent_simultaneous_flow_rates,
+ check_conflict=False,
+ )
+
+ # Validate any remaining unexpected kwargs
+ self._validate_kwargs(kwargs)
super().__init__(
label,
@@ -1122,16 +1195,11 @@ def __init__(
prevent_simultaneous_flow_rates: bool = False,
**kwargs,
):
- source = kwargs.pop('source', None)
- if source is not None:
- warnings.warn(
- 'The use of the source argument is deprecated. Use the outputs argument instead.',
- DeprecationWarning,
- stacklevel=2,
- )
- if outputs is not None:
- raise ValueError('Either source or outputs can be specified, but not both.')
- outputs = [source]
+ # Handle deprecated parameter using centralized helper
+ outputs = self._handle_deprecated_kwarg(kwargs, 'source', 'outputs', outputs, transform=lambda x: [x])
+
+ # Validate any remaining unexpected kwargs
+ self._validate_kwargs(kwargs)
self.prevent_simultaneous_flow_rates = prevent_simultaneous_flow_rates
super().__init__(
@@ -1250,16 +1318,11 @@ def __init__(
Note:
The deprecated `sink` kwarg is accepted for compatibility but will be removed in future releases.
"""
- sink = kwargs.pop('sink', None)
- if sink is not None:
- warnings.warn(
- 'The use of the sink argument is deprecated. Use the inputs argument instead.',
- DeprecationWarning,
- stacklevel=2,
- )
- if inputs is not None:
- raise ValueError('Either sink or inputs can be specified, but not both.')
- inputs = [sink]
+ # Handle deprecated parameter using centralized helper
+ inputs = self._handle_deprecated_kwarg(kwargs, 'sink', 'inputs', inputs, transform=lambda x: [x])
+
+ # Validate any remaining unexpected kwargs
+ self._validate_kwargs(kwargs)
self.prevent_simultaneous_flow_rates = prevent_simultaneous_flow_rates
super().__init__(
diff --git a/flixopt/config.py b/flixopt/config.py
index 2ec5bf88c..4ac8263b2 100644
--- a/flixopt/config.py
+++ b/flixopt/config.py
@@ -1,6 +1,7 @@
from __future__ import annotations
import logging
+import sys
import warnings
from logging.handlers import RotatingFileHandler
from pathlib import Path
@@ -25,19 +26,20 @@
'logging': MappingProxyType(
{
'level': 'INFO',
- 'file': 'flixopt.log',
+ 'file': None,
'rich': False,
- 'console': True,
+ 'console': False,
'max_file_size': 10_485_760, # 10MB
'backup_count': 5,
'date_format': '%Y-%m-%d %H:%M:%S',
'format': '%(message)s',
'console_width': 120,
'show_path': False,
+ 'show_logger_name': False,
'colors': MappingProxyType(
{
- 'DEBUG': '\033[32m', # Green
- 'INFO': '\033[34m', # Blue
+ 'DEBUG': '\033[90m', # Bright Black/Gray
+ 'INFO': '\033[0m', # Default/White
'WARNING': '\033[33m', # Yellow
'ERROR': '\033[31m', # Red
'CRITICAL': '\033[1m\033[31m', # Bold Red
@@ -59,156 +61,111 @@
class CONFIG:
"""Configuration for flixopt library.
- The CONFIG class provides centralized configuration for logging and modeling parameters.
- All changes require calling ``CONFIG.apply()`` to take effect.
-
- By default, logging outputs to both console and file ('flixopt.log').
+ Always call ``CONFIG.apply()`` after changes.
Attributes:
- Logging: Nested class containing all logging configuration options.
- Colors: Nested subclass under Logging containing ANSI color codes for log levels.
- Modeling: Nested class containing optimization modeling parameters.
- config_name (str): Name of the configuration (default: 'flixopt').
-
- Logging Attributes:
- level (str): Logging level: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'.
- Default: 'INFO'
- file (str | None): Log file path. Default: 'flixopt.log'.
- Set to None to disable file logging.
- console (bool): Enable console (stdout) logging. Default: True
- rich (bool): Use Rich library for enhanced console output. Default: False
- max_file_size (int): Maximum log file size in bytes before rotation.
- Default: 10485760 (10MB)
- backup_count (int): Number of backup log files to keep. Default: 5
- date_format (str): Date/time format for log messages.
- Default: '%Y-%m-%d %H:%M:%S'
- format (str): Log message format string. Default: '%(message)s'
- console_width (int): Console width for Rich handler. Default: 120
- show_path (bool): Show file paths in log messages. Default: False
-
- Colors Attributes:
- DEBUG (str): ANSI color code for DEBUG level. Default: '\\033[32m' (green)
- INFO (str): ANSI color code for INFO level. Default: '\\033[34m' (blue)
- WARNING (str): ANSI color code for WARNING level. Default: '\\033[33m' (yellow)
- ERROR (str): ANSI color code for ERROR level. Default: '\\033[31m' (red)
- CRITICAL (str): ANSI color code for CRITICAL level. Default: '\\033[1m\\033[31m' (bold red)
-
- Works with both Rich and standard console handlers.
- Rich automatically converts ANSI codes using Style.from_ansi().
-
- Common ANSI codes:
-
- - '\\033[30m' - Black
- - '\\033[31m' - Red
- - '\\033[32m' - Green
- - '\\033[33m' - Yellow
- - '\\033[34m' - Blue
- - '\\033[35m' - Magenta
- - '\\033[36m' - Cyan
- - '\\033[37m' - White
- - '\\033[1m\\033[3Xm' - Bold color (replace X with color code 0-7)
- - '\\033[2m\\033[3Xm' - Dim color (replace X with color code 0-7)
-
- Examples:
-
- - Magenta: '\\033[35m'
- - Bold cyan: '\\033[1m\\033[36m'
- - Dim green: '\\033[2m\\033[32m'
-
- Modeling Attributes:
- big (int): Large number for optimization constraints. Default: 10000000
- epsilon (float): Small tolerance value. Default: 1e-5
- big_binary_bound (int): Upper bound for binary variable constraints.
- Default: 100000
+ Logging: Logging configuration.
+ Modeling: Optimization modeling parameters.
+ config_name: Configuration name.
Examples:
- Basic configuration::
-
- from flixopt import CONFIG
-
- CONFIG.Logging.console = True
- CONFIG.Logging.level = 'DEBUG'
- CONFIG.apply()
-
- Configure log file rotation::
-
- CONFIG.Logging.file = 'myapp.log'
- CONFIG.Logging.max_file_size = 5_242_880 # 5 MB
- CONFIG.Logging.backup_count = 3
- CONFIG.apply()
+ ```python
+ CONFIG.Logging.console = True
+ CONFIG.Logging.level = 'DEBUG'
+ CONFIG.apply()
+ ```
+
+ Load from YAML file:
+
+ ```yaml
+ logging:
+ level: DEBUG
+ console: true
+ file: app.log
+ ```
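+
+ and load it (assuming the snippet above is saved as ``config.yaml``):
+
+ ```python
+ CONFIG.load_from_file('config.yaml')
+ ```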
+ """
- Customize log colors::
+ class Logging:
+ """Logging configuration.
+
+ Silent by default. Enable via ``console=True`` or ``file='path'``.
+
+ Attributes:
+ level: Logging level.
+ file: Log file path; set to a path to enable file logging.
+ console: Enable console output.
+ rich: Use Rich library for enhanced output.
+ max_file_size: Max file size before rotation.
+ backup_count: Number of backup files to keep.
+ date_format: Date/time format string.
+ format: Log message format string.
+ console_width: Console width for Rich handler.
+ show_path: Show file paths in messages.
+ show_logger_name: Show logger name in messages.
+ Colors: ANSI color codes for log levels.
- CONFIG.Logging.Colors.INFO = '\\033[35m' # Magenta
- CONFIG.Logging.Colors.DEBUG = '\\033[36m' # Cyan
- CONFIG.Logging.Colors.ERROR = '\\033[1m\\033[31m' # Bold red
+ Examples:
+ ```python
+ # File logging with rotation
+ CONFIG.Logging.file = 'app.log'
+ CONFIG.Logging.max_file_size = 5_242_880 # 5MB
CONFIG.apply()
- Use Rich handler with custom colors::
-
- CONFIG.Logging.console = True
+ # Rich handler with stdout
+ CONFIG.Logging.console = True # or 'stdout'
CONFIG.Logging.rich = True
- CONFIG.Logging.console_width = 100
- CONFIG.Logging.show_path = True
- CONFIG.Logging.Colors.INFO = '\\033[36m' # Cyan
CONFIG.apply()
- Load from YAML file::
-
- CONFIG.load_from_file('config.yaml')
-
- Example YAML config file:
-
- .. code-block:: yaml
-
- logging:
- level: DEBUG
- console: true
- file: app.log
- rich: true
- max_file_size: 5242880 # 5MB
- backup_count: 3
- date_format: '%H:%M:%S'
- console_width: 100
- show_path: true
- colors:
- DEBUG: "\\033[36m" # Cyan
- INFO: "\\033[32m" # Green
- WARNING: "\\033[33m" # Yellow
- ERROR: "\\033[31m" # Red
- CRITICAL: "\\033[1m\\033[31m" # Bold red
-
- modeling:
- big: 20000000
- epsilon: 1e-6
- big_binary_bound: 200000
-
- Reset to defaults::
-
- CONFIG.reset()
-
- Export current configuration::
-
- config_dict = CONFIG.to_dict()
- import yaml
-
- with open('my_config.yaml', 'w') as f:
- yaml.dump(config_dict, f)
- """
+ # Console output to stderr
+ CONFIG.Logging.console = 'stderr'
+ CONFIG.apply()
+ ```
+ """
- class Logging:
- level: str = _DEFAULTS['logging']['level']
+ level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] = _DEFAULTS['logging']['level']
file: str | None = _DEFAULTS['logging']['file']
rich: bool = _DEFAULTS['logging']['rich']
- console: bool = _DEFAULTS['logging']['console']
+ console: bool | Literal['stdout', 'stderr'] = _DEFAULTS['logging']['console']
max_file_size: int = _DEFAULTS['logging']['max_file_size']
backup_count: int = _DEFAULTS['logging']['backup_count']
date_format: str = _DEFAULTS['logging']['date_format']
format: str = _DEFAULTS['logging']['format']
console_width: int = _DEFAULTS['logging']['console_width']
show_path: bool = _DEFAULTS['logging']['show_path']
+ show_logger_name: bool = _DEFAULTS['logging']['show_logger_name']
class Colors:
+ """ANSI color codes for log levels.
+
+ Attributes:
+ DEBUG: ANSI color for DEBUG level.
+ INFO: ANSI color for INFO level.
+ WARNING: ANSI color for WARNING level.
+ ERROR: ANSI color for ERROR level.
+ CRITICAL: ANSI color for CRITICAL level.
+
+ Examples:
+ ```python
+ CONFIG.Logging.Colors.INFO = '\\033[32m' # Green
+ CONFIG.Logging.Colors.ERROR = '\\033[1m\\033[31m' # Bold red
+ CONFIG.apply()
+ ```
+
+ Common ANSI codes:
+ - '\\033[30m' - Black
+ - '\\033[31m' - Red
+ - '\\033[32m' - Green
+ - '\\033[33m' - Yellow
+ - '\\033[34m' - Blue
+ - '\\033[35m' - Magenta
+ - '\\033[36m' - Cyan
+ - '\\033[37m' - White
+ - '\\033[90m' - Bright Black/Gray
+ - '\\033[0m' - Reset to default
+ - '\\033[1m\\033[3Xm' - Bold (replace X with color code 0-7)
+ - '\\033[2m\\033[3Xm' - Dim (replace X with color code 0-7)
+ """
+
DEBUG: str = _DEFAULTS['logging']['colors']['DEBUG']
INFO: str = _DEFAULTS['logging']['colors']['INFO']
WARNING: str = _DEFAULTS['logging']['colors']['WARNING']
@@ -216,6 +173,14 @@ class Colors:
CRITICAL: str = _DEFAULTS['logging']['colors']['CRITICAL']
class Modeling:
+ """Optimization modeling parameters.
+
+ Attributes:
+ big: Large number for big-M constraints.
+ epsilon: Tolerance for numerical comparisons.
+ big_binary_bound: Upper bound for binary constraints.
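+
+ Examples:
+ A minimal sketch (the values here are illustrative, not recommendations):
+
+ ```python
+ CONFIG.Modeling.big = 50_000_000
+ CONFIG.Modeling.epsilon = 1e-6
+ CONFIG.apply()
+ ```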
+ """
+
big: int = _DEFAULTS['modeling']['big']
epsilon: float = _DEFAULTS['modeling']['epsilon']
big_binary_bound: int = _DEFAULTS['modeling']['big_binary_bound']
@@ -250,6 +215,18 @@ def apply(cls):
'ERROR': cls.Logging.Colors.ERROR,
'CRITICAL': cls.Logging.Colors.CRITICAL,
}
+ valid_levels = list(colors_dict)
+ if cls.Logging.level.upper() not in valid_levels:
+ raise ValueError(f"Invalid log level '{cls.Logging.level}'. Must be one of: {', '.join(valid_levels)}")
+
+ if cls.Logging.max_file_size <= 0:
+ raise ValueError('max_file_size must be positive')
+
+ if cls.Logging.backup_count < 0:
+ raise ValueError('backup_count must be non-negative')
+
+ if cls.Logging.console not in (False, True, 'stdout', 'stderr'):
+ raise ValueError(f"console must be False, True, 'stdout', or 'stderr', got {cls.Logging.console}")
_setup_logging(
default_level=cls.Logging.level,
@@ -262,25 +239,37 @@ def apply(cls):
format=cls.Logging.format,
console_width=cls.Logging.console_width,
show_path=cls.Logging.show_path,
+ show_logger_name=cls.Logging.show_logger_name,
colors=colors_dict,
)
@classmethod
def load_from_file(cls, config_file: str | Path):
- """Load configuration from YAML file and apply it."""
+ """Load configuration from YAML file and apply it.
+
+ Args:
+ config_file: Path to the YAML configuration file.
+
+ Raises:
+ FileNotFoundError: If the config file does not exist.
+ """
config_path = Path(config_file)
if not config_path.exists():
raise FileNotFoundError(f'Config file not found: {config_file}')
with config_path.open() as file:
- config_dict = yaml.safe_load(file)
+ config_dict = yaml.safe_load(file) or {}
cls._apply_config_dict(config_dict)
cls.apply()
@classmethod
def _apply_config_dict(cls, config_dict: dict):
- """Apply configuration dictionary to class attributes."""
+ """Apply configuration dictionary to class attributes.
+
+ Args:
+ config_dict: Dictionary containing configuration values.
+ """
for key, value in config_dict.items():
if key == 'logging' and isinstance(value, dict):
for nested_key, nested_value in value.items():
@@ -298,7 +287,11 @@ def _apply_config_dict(cls, config_dict: dict):
@classmethod
def to_dict(cls):
- """Convert the configuration class into a dictionary for JSON serialization."""
+ """Convert the configuration class into a dictionary for JSON serialization.
+
+ Returns:
+ Dictionary representation of the current configuration.
+ """
return {
'config_name': cls.config_name,
'logging': {
@@ -312,6 +305,7 @@ def to_dict(cls):
'format': cls.Logging.format,
'console_width': cls.Logging.console_width,
'show_path': cls.Logging.show_path,
+ 'show_logger_name': cls.Logging.show_logger_name,
'colors': {
'DEBUG': cls.Logging.Colors.DEBUG,
'INFO': cls.Logging.Colors.INFO,
@@ -328,40 +322,67 @@ def to_dict(cls):
}
-class MultilineFormater(logging.Formatter):
- """Formatter that handles multi-line messages with consistent prefixes."""
+class MultilineFormatter(logging.Formatter):
+ """Formatter that handles multi-line messages with consistent prefixes.
+
+ Args:
+ fmt: Log message format string.
+ datefmt: Date/time format string.
+ show_logger_name: Show logger name in log messages.
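+
+ Example:
+ A minimal sketch attaching the formatter to a handler by hand
+ (normally done internally via ``CONFIG.apply()``):
+
+ ```python
+ import logging
+
+ handler = logging.StreamHandler()
+ handler.setFormatter(MultilineFormatter(datefmt='%H:%M:%S', show_logger_name=True))
+ logging.getLogger('flixopt').addHandler(handler)
+ ```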
+ """
- def __init__(self, fmt=None, datefmt=None):
+ def __init__(self, fmt: str = '%(message)s', datefmt: str | None = None, show_logger_name: bool = False):
super().__init__(fmt=fmt, datefmt=datefmt)
+ self.show_logger_name = show_logger_name
- def format(self, record):
- message_lines = record.getMessage().split('\n')
+ def format(self, record) -> str:
+ record.message = record.getMessage()
+ message_lines = self._style.format(record).split('\n')
timestamp = self.formatTime(record, self.datefmt)
log_level = record.levelname.ljust(8)
- log_prefix = f'{timestamp} | {log_level} |'
- first_line = [f'{log_prefix} {message_lines[0]}']
- if len(message_lines) > 1:
- lines = first_line + [f'{log_prefix} {line}' for line in message_lines[1:]]
+ if self.show_logger_name:
+ # Truncate long logger names for readability
+ logger_name = record.name if len(record.name) <= 20 else f'...{record.name[-17:]}'
+ log_prefix = f'{timestamp} | {log_level} | {logger_name.ljust(20)} |'
else:
- lines = first_line
+ log_prefix = f'{timestamp} | {log_level} |'
+
+ indent = ' ' * (len(log_prefix) + 1) # +1 for the space after prefix
+
+ lines = [f'{log_prefix} {message_lines[0]}']
+ if len(message_lines) > 1:
+ lines.extend([f'{indent}{line}' for line in message_lines[1:]])
return '\n'.join(lines)
-class ColoredMultilineFormater(MultilineFormater):
- """Formatter that adds ANSI colors to multi-line log messages."""
+class ColoredMultilineFormatter(MultilineFormatter):
+ """Formatter that adds ANSI colors to multi-line log messages.
+
+ Args:
+ fmt: Log message format string.
+ datefmt: Date/time format string.
+ colors: Dictionary of ANSI color codes for each log level.
+ show_logger_name: Show logger name in log messages.
+ """
RESET = '\033[0m'
- def __init__(self, fmt=None, datefmt=None, colors=None):
- super().__init__(fmt=fmt, datefmt=datefmt)
+ def __init__(
+ self,
+ fmt: str | None = None,
+ datefmt: str | None = None,
+ colors: dict[str, str] | None = None,
+ show_logger_name: bool = False,
+ ):
+ super().__init__(fmt=fmt, datefmt=datefmt, show_logger_name=show_logger_name)
self.COLORS = (
colors
if colors is not None
else {
- 'DEBUG': '\033[32m',
- 'INFO': '\033[34m',
+ 'DEBUG': '\033[90m',
+ 'INFO': '\033[0m',
'WARNING': '\033[33m',
'ERROR': '\033[31m',
'CRITICAL': '\033[1m\033[31m',
@@ -377,18 +398,22 @@ def format(self, record):
def _create_console_handler(
use_rich: bool = False,
+ stream: Literal['stdout', 'stderr'] = 'stdout',
console_width: int = 120,
show_path: bool = False,
+ show_logger_name: bool = False,
date_format: str = '%Y-%m-%d %H:%M:%S',
format: str = '%(message)s',
colors: dict[str, str] | None = None,
) -> logging.Handler:
- """Create a console (stdout) logging handler.
+ """Create a console logging handler.
Args:
use_rich: If True, use RichHandler with color support.
+ stream: Output stream, either 'stdout' or 'stderr'.
console_width: Width of the console for Rich handler.
show_path: Show file paths in log messages (Rich only).
+ show_logger_name: Show logger name in log messages.
date_format: Date/time format string.
format: Log message format string.
colors: Dictionary of ANSI color codes for each log level.
@@ -396,6 +421,9 @@ def _create_console_handler(
Returns:
Configured logging handler (RichHandler or StreamHandler).
"""
+ # Determine the stream object
+ stream_obj = sys.stdout if stream == 'stdout' else sys.stderr
+
if use_rich:
# Convert ANSI codes to Rich theme
if colors:
@@ -413,7 +441,7 @@ def _create_console_handler(
else:
theme = None
- console = Console(width=console_width, theme=theme)
+ console = Console(width=console_width, theme=theme, file=stream_obj)
handler = RichHandler(
console=console,
rich_tracebacks=True,
@@ -423,8 +451,15 @@ def _create_console_handler(
)
handler.setFormatter(logging.Formatter(format))
else:
- handler = logging.StreamHandler()
- handler.setFormatter(ColoredMultilineFormater(fmt=format, datefmt=date_format, colors=colors))
+ handler = logging.StreamHandler(stream=stream_obj)
+ handler.setFormatter(
+ ColoredMultilineFormatter(
+ fmt=format,
+ datefmt=date_format,
+ colors=colors,
+ show_logger_name=show_logger_name,
+ )
+ )
return handler
@@ -433,6 +468,7 @@ def _create_file_handler(
log_file: str,
max_file_size: int = 10_485_760,
backup_count: int = 5,
+ show_logger_name: bool = False,
date_format: str = '%Y-%m-%d %H:%M:%S',
format: str = '%(message)s',
) -> RotatingFileHandler:
@@ -442,19 +478,41 @@ def _create_file_handler(
log_file: Path to the log file.
max_file_size: Maximum size in bytes before rotation.
backup_count: Number of backup files to keep.
+ show_logger_name: Show logger name in log messages.
date_format: Date/time format string.
format: Log message format string.
Returns:
Configured RotatingFileHandler (without colors).
"""
- handler = RotatingFileHandler(
- log_file,
- maxBytes=max_file_size,
- backupCount=backup_count,
- encoding='utf-8',
+
+ # Ensure parent directory exists
+ log_path = Path(log_file)
+ try:
+ log_path.parent.mkdir(parents=True, exist_ok=True)
+ except PermissionError as e:
+ raise PermissionError(f"Cannot create log directory '{log_path.parent}': Permission denied") from e
+
+ try:
+ handler = RotatingFileHandler(
+ log_file,
+ maxBytes=max_file_size,
+ backupCount=backup_count,
+ encoding='utf-8',
+ )
+ except PermissionError as e:
+ raise PermissionError(
+ f"Cannot write to log file '{log_file}': Permission denied. "
+ f'Choose a different location or check file permissions.'
+ ) from e
+
+ handler.setFormatter(
+ MultilineFormatter(
+ fmt=format,
+ datefmt=date_format,
+ show_logger_name=show_logger_name,
+ )
)
- handler.setFormatter(MultilineFormater(fmt=format, datefmt=date_format))
return handler
@@ -462,15 +520,16 @@ def _setup_logging(
default_level: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'] = 'INFO',
log_file: str | None = None,
use_rich_handler: bool = False,
- console: bool = False,
+ console: bool | Literal['stdout', 'stderr'] = False,
max_file_size: int = 10_485_760,
backup_count: int = 5,
date_format: str = '%Y-%m-%d %H:%M:%S',
format: str = '%(message)s',
console_width: int = 120,
show_path: bool = False,
+ show_logger_name: bool = False,
colors: dict[str, str] | None = None,
-):
+) -> None:
"""Internal function to setup logging - use CONFIG.apply() instead.
Configures the flixopt logger with console and/or file handlers.
@@ -487,6 +546,7 @@ def _setup_logging(
format: Log message format string.
console_width: Console width for Rich handler.
show_path: Show file paths in log messages (Rich only).
+ show_logger_name: Show logger name in log messages.
colors: ANSI color codes for each log level.
"""
logger = logging.getLogger('flixopt')
@@ -494,12 +554,17 @@ def _setup_logging(
logger.propagate = False # Prevent duplicate logs
logger.handlers.clear()
+ # console: False disables console logging; True and 'stdout' log to stdout; 'stderr' logs to stderr
if console:
+ # Convert True to 'stdout', keep 'stdout'/'stderr' as-is
+ stream = 'stdout' if console is True else console
logger.addHandler(
_create_console_handler(
use_rich=use_rich_handler,
+ stream=stream,
console_width=console_width,
show_path=show_path,
+ show_logger_name=show_logger_name,
date_format=date_format,
format=format,
colors=colors,
@@ -512,6 +577,7 @@ def _setup_logging(
log_file=log_file,
max_file_size=max_file_size,
backup_count=backup_count,
+ show_logger_name=show_logger_name,
date_format=date_format,
format=format,
)
@@ -521,28 +587,22 @@ def _setup_logging(
if not logger.handlers:
logger.addHandler(logging.NullHandler())
- return logger
-
def change_logging_level(level_name: Literal['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']):
- """
- Change the logging level for the flixopt logger and all its handlers.
+ """Change the logging level for the flixopt logger and all its handlers.
.. deprecated:: 2.1.11
Use ``CONFIG.Logging.level = level_name`` and ``CONFIG.apply()`` instead.
This function will be removed in version 3.0.0.
- Parameters
- ----------
- level_name : {'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'}
- The logging level to set.
-
- Examples
- --------
- >>> change_logging_level('DEBUG') # deprecated
- >>> # Use this instead:
- >>> CONFIG.Logging.level = 'DEBUG'
- >>> CONFIG.apply()
+ Args:
+ level_name: The logging level to set.
+
+ Examples:
+ >>> change_logging_level('DEBUG') # deprecated
+ >>> # Use this instead:
+ >>> CONFIG.Logging.level = 'DEBUG'
+ >>> CONFIG.apply()
"""
warnings.warn(
'change_logging_level is deprecated and will be removed in version 3.0.0. '
diff --git a/flixopt/core.py b/flixopt/core.py
index 532792e63..c163de554 100644
--- a/flixopt/core.py
+++ b/flixopt/core.py
@@ -3,13 +3,10 @@
It provides Datatypes, logging functionality, and some functions to transform data structures.
"""
-import inspect
-import json
import logging
-import pathlib
-from collections import Counter
-from collections.abc import Iterator
-from typing import Any, Literal, Optional
+import warnings
+from itertools import permutations
+from typing import Literal, Union
import numpy as np
import pandas as pd
@@ -18,10 +15,16 @@
logger = logging.getLogger('flixopt')
Scalar = int | float
-"""A type representing a single number, either integer or float."""
+"""A single number, either integer or float."""
-NumericData = int | float | np.integer | np.floating | np.ndarray | pd.Series | pd.DataFrame | xr.DataArray
-"""Represents any form of numeric data, from simple scalars to complex data structures."""
+PeriodicDataUser = int | float | np.integer | np.floating | np.ndarray | pd.Series | pd.DataFrame | xr.DataArray
+"""User data which has no time dimension. Internally converted to a Scalar or an xr.DataArray without a time dimension."""
+
+PeriodicData = xr.DataArray
+"""Internally used datatypes for periodic data."""
+
+FlowSystemDimensions = Literal['time', 'period', 'scenario']
+"""Possible dimensions of a FlowSystem."""
class PlausibilityError(Exception):
@@ -36,948 +39,607 @@ class ConversionError(Exception):
pass
-class DataConverter:
- """
- Converts various data types into xarray.DataArray with a timesteps index.
-
- Supports: scalars, arrays, Series, DataFrames, and DataArrays.
- """
-
- @staticmethod
- def as_dataarray(data: NumericData, timesteps: pd.DatetimeIndex) -> xr.DataArray:
- """Convert data to xarray.DataArray with specified timesteps index."""
- if not isinstance(timesteps, pd.DatetimeIndex) or len(timesteps) == 0:
- raise ValueError(f'Timesteps must be a non-empty DatetimeIndex, got {type(timesteps).__name__}')
- if not timesteps.name == 'time':
- raise ConversionError(f'DatetimeIndex is not named correctly. Must be named "time", got {timesteps.name=}')
-
- coords = [timesteps]
- dims = ['time']
- expected_shape = (len(timesteps),)
-
- try:
- if isinstance(data, (int, float, np.integer, np.floating)):
- return xr.DataArray(data, coords=coords, dims=dims)
- elif isinstance(data, pd.DataFrame):
- if not data.index.equals(timesteps):
- raise ConversionError(
- f"DataFrame index doesn't match timesteps index. "
- f'Its missing the following time steps: {timesteps.difference(data.index)}. '
- f'Some parameters might need an extra timestep at the end.'
- )
- if not len(data.columns) == 1:
- raise ConversionError('DataFrame must have exactly one column')
- return xr.DataArray(data.values.flatten(), coords=coords, dims=dims)
- elif isinstance(data, pd.Series):
- if not data.index.equals(timesteps):
- raise ConversionError(
- f"Series index doesn't match timesteps index. "
- f'Its missing the following time steps: {timesteps.difference(data.index)}. '
- f'Some parameters might need an extra timestep at the end.'
- )
- return xr.DataArray(data.values, coords=coords, dims=dims)
- elif isinstance(data, np.ndarray):
- if data.ndim != 1:
- raise ConversionError(f'Array must be 1-dimensional, got {data.ndim}')
- elif data.shape[0] != expected_shape[0]:
- raise ConversionError(f"Array shape {data.shape} doesn't match expected {expected_shape}")
- return xr.DataArray(data, coords=coords, dims=dims)
- elif isinstance(data, xr.DataArray):
- if data.dims != tuple(dims):
- raise ConversionError(f"DataArray dimensions {data.dims} don't match expected {dims}")
- if data.sizes[dims[0]] != len(coords[0]):
- raise ConversionError(
- f"DataArray length {data.sizes[dims[0]]} doesn't match expected {len(coords[0])}"
- )
- return data.copy(deep=True)
- else:
- raise ConversionError(f'Unsupported type: {type(data).__name__}')
- except Exception as e:
- if isinstance(e, ConversionError):
- raise
- raise ConversionError(f'Converting data {type(data)} to xarray.Dataset raised an error: {str(e)}') from e
-
-
-class TimeSeriesData:
- """
- TimeSeriesData wraps time series data with aggregation metadata for optimization.
-
- This class combines time series data with special characteristics needed for aggregated calculations.
- It allows grouping related time series to prevent overweighting in optimization models.
-
- Example:
- When you have multiple solar time series, they should share aggregation weight:
- ```python
- solar1 = TimeSeriesData(sol_array_1, agg_group='solar')
- solar2 = TimeSeriesData(sol_array_2, agg_group='solar')
- solar3 = TimeSeriesData(sol_array_3, agg_group='solar')
- # These 3 series share one weight (each gets weight = 1/3 instead of 1)
- ```
-
- Args:
- data: The timeseries data, which can be a scalar, array, or numpy array.
- agg_group: The group this TimeSeriesData belongs to. agg_weight is split between group members. Default is None.
- agg_weight: The weight for calculation_type 'aggregated', should be between 0 and 1. Default is None.
-
- Raises:
- ValueError: If both agg_group and agg_weight are set.
- """
-
- # TODO: Move to Interface.py
- def __init__(self, data: NumericData, agg_group: str | None = None, agg_weight: float | None = None):
- self.data = data
- self.agg_group = agg_group
- self.agg_weight = agg_weight
- if (agg_group is not None) and (agg_weight is not None):
- raise ValueError('Either or explicit can be used. Not both!')
- self.label: str | None = None
-
- def __repr__(self):
- # Get the constructor arguments and their current values
- init_signature = inspect.signature(self.__init__)
- init_args = init_signature.parameters
-
- # Create a dictionary with argument names and their values
- args_str = ', '.join(f'{name}={repr(getattr(self, name, None))}' for name in init_args if name != 'self')
- return f'{self.__class__.__name__}({args_str})'
-
- def __str__(self):
- return str(self.data)
+class TimeSeriesData(xr.DataArray):
+ """Minimal TimeSeriesData that inherits from xr.DataArray with aggregation metadata."""
-
-NumericDataTS = NumericData | TimeSeriesData
-"""Represents either standard numeric data or TimeSeriesData."""
-
-
-class TimeSeries:
- """
- A class representing time series data with active and stored states.
-
- TimeSeries provides a way to store time-indexed data and work with temporal subsets.
- It supports arithmetic operations, aggregation, and JSON serialization.
-
- Attributes:
- name (str): The name of the time series
- aggregation_weight (Optional[float]): Weight used for aggregation
- aggregation_group (Optional[str]): Group name for shared aggregation weighting
- needs_extra_timestep (bool): Whether this series needs an extra timestep
- """
-
- @classmethod
- def from_datasource(
- cls,
- data: NumericData,
- name: str,
- timesteps: pd.DatetimeIndex,
- aggregation_weight: float | None = None,
- aggregation_group: str | None = None,
- needs_extra_timestep: bool = False,
- ) -> 'TimeSeries':
- """
- Initialize the TimeSeries from multiple data sources.
-
- Args:
- data: The time series data
- name: The name of the TimeSeries
- timesteps: The timesteps of the TimeSeries
- aggregation_weight: The weight in aggregation calculations
- aggregation_group: Group this TimeSeries belongs to for aggregation weight sharing
- needs_extra_timestep: Whether this series requires an extra timestep
-
- Returns:
- A new TimeSeries instance
- """
- return cls(
- DataConverter.as_dataarray(data, timesteps),
- name,
- aggregation_weight,
- aggregation_group,
- needs_extra_timestep,
- )
-
- @classmethod
- def from_json(cls, data: dict[str, Any] | None = None, path: str | None = None) -> 'TimeSeries':
- """
- Load a TimeSeries from a dictionary or json file.
-
- Args:
- data: Dictionary containing TimeSeries data
- path: Path to a JSON file containing TimeSeries data
-
- Returns:
- A new TimeSeries instance
-
- Raises:
- ValueError: If both path and data are provided or neither is provided
- """
- if (path is None and data is None) or (path is not None and data is not None):
- raise ValueError("Exactly one of 'path' or 'data' must be provided")
-
- if path is not None:
- with open(path) as f:
- data = json.load(f)
-
- # Convert ISO date strings to datetime objects
- data['data']['coords']['time']['data'] = pd.to_datetime(data['data']['coords']['time']['data'])
-
- # Create the TimeSeries instance
- return cls(
- data=xr.DataArray.from_dict(data['data']),
- name=data['name'],
- aggregation_weight=data['aggregation_weight'],
- aggregation_group=data['aggregation_group'],
- needs_extra_timestep=data['needs_extra_timestep'],
- )
+ __slots__ = () # No additional instance attributes - everything goes in attrs
def __init__(
self,
- data: xr.DataArray,
- name: str,
- aggregation_weight: float | None = None,
+ *args,
aggregation_group: str | None = None,
- needs_extra_timestep: bool = False,
+ aggregation_weight: float | None = None,
+ agg_group: str | None = None,
+ agg_weight: float | None = None,
+ **kwargs,
):
"""
- Initialize a TimeSeries with a DataArray.
-
- Args:
- data: The DataArray containing time series data
- name: The name of the TimeSeries
- aggregation_weight: The weight in aggregation calculations
- aggregation_group: Group this TimeSeries belongs to for weight sharing
- needs_extra_timestep: Whether this series requires an extra timestep
-
- Raises:
- ValueError: If data doesn't have a 'time' index or has more than 1 dimension
- """
- if 'time' not in data.indexes:
- raise ValueError(f'DataArray must have a "time" index. Got {data.indexes}')
- if data.ndim > 1:
- raise ValueError(f'Number of dimensions of DataArray must be 1. Got {data.ndim}')
-
- self.name = name
- self.aggregation_weight = aggregation_weight
- self.aggregation_group = aggregation_group
- self.needs_extra_timestep = needs_extra_timestep
-
- # Data management
- self._stored_data = data.copy(deep=True)
- self._backup = self._stored_data.copy(deep=True)
- self._active_timesteps = self._stored_data.indexes['time']
- self._active_data = None
- self._update_active_data()
-
- def reset(self):
- """
- Reset active timesteps to the full set of stored timesteps.
- """
- self.active_timesteps = None
-
- def restore_data(self):
- """
- Restore stored_data from the backup and reset active timesteps.
- """
- self._stored_data = self._backup.copy(deep=True)
- self.reset()
-
- def to_json(self, path: pathlib.Path | None = None) -> dict[str, Any]:
- """
- Save the TimeSeries to a dictionary or JSON file.
-
Args:
- path: Optional path to save JSON file
-
- Returns:
- Dictionary representation of the TimeSeries
- """
- data = {
- 'name': self.name,
- 'aggregation_weight': self.aggregation_weight,
- 'aggregation_group': self.aggregation_group,
- 'needs_extra_timestep': self.needs_extra_timestep,
- 'data': self.active_data.to_dict(),
- }
-
- # Convert datetime objects to ISO strings
- data['data']['coords']['time']['data'] = [date.isoformat() for date in data['data']['coords']['time']['data']]
-
- # Save to file if path is provided
- if path is not None:
- indent = 4 if len(self.active_timesteps) <= 480 else None
- with open(path, 'w', encoding='utf-8') as f:
- json.dump(data, f, indent=indent, ensure_ascii=False)
-
- return data
-
- @property
- def stats(self) -> str:
- """
- Return a statistical summary of the active data.
-
- Returns:
- String representation of data statistics
- """
- return get_numeric_stats(self.active_data, padd=0)
-
- def _update_active_data(self):
- """
- Update the active data based on active_timesteps.
- """
- self._active_data = self._stored_data.sel(time=self.active_timesteps)
+ *args: Arguments passed to DataArray
+ aggregation_group: Aggregation group name
+ aggregation_weight: Aggregation weight (0-1)
+ agg_group: Deprecated, use aggregation_group instead
+ agg_weight: Deprecated, use aggregation_weight instead
+ **kwargs: Additional arguments passed to DataArray
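+
+ Example:
+ A minimal sketch (data values are illustrative):
+
+ ```python
+ solar1 = TimeSeriesData([0.1, 0.5, 0.3], aggregation_group='solar')
+ solar2 = TimeSeriesData([0.2, 0.4, 0.6], aggregation_group='solar')
+ # Members of one aggregation_group split the aggregation weight between them
+ ```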
+ """
+ if agg_group is not None:
+ warnings.warn('agg_group is deprecated, use aggregation_group instead', DeprecationWarning, stacklevel=2)
+ aggregation_group = agg_group
+ if agg_weight is not None:
+ warnings.warn('agg_weight is deprecated, use aggregation_weight instead', DeprecationWarning, stacklevel=2)
+ aggregation_weight = agg_weight
+
+ if (aggregation_group is not None) and (aggregation_weight is not None):
+ raise ValueError('Use either aggregation_group or aggregation_weight, not both')
+
+ # Let xarray handle all the initialization complexity
+ super().__init__(*args, **kwargs)
+
+ # Add our metadata to attrs after initialization
+ if aggregation_group is not None:
+ self.attrs['aggregation_group'] = aggregation_group
+ if aggregation_weight is not None:
+ self.attrs['aggregation_weight'] = aggregation_weight
+
+ # Always mark as TimeSeriesData
+ self.attrs['__timeseries_data__'] = True
+
+ def fit_to_coords(
+ self,
+ coords: dict[str, pd.Index],
+ name: str | None = None,
+ ) -> 'TimeSeriesData':
+ """Fit the data to the given coordinates. Returns a new TimeSeriesData object if the current coords are different."""
+ if self.coords.equals(xr.Coordinates(coords)):
+ return self
+
+ da = DataConverter.to_dataarray(self.data, coords=coords)
+ return self.__class__(
+ da,
+ aggregation_group=self.aggregation_group,
+ aggregation_weight=self.aggregation_weight,
+ name=name if name is not None else self.name,
+ )
@property
- def all_equal(self) -> bool:
- """Check if all values in the series are equal."""
- return np.unique(self.active_data.values).size == 1
+ def aggregation_group(self) -> str | None:
+ return self.attrs.get('aggregation_group')
@property
- def active_timesteps(self) -> pd.DatetimeIndex:
- """Get the current active timesteps."""
- return self._active_timesteps
-
- @active_timesteps.setter
- def active_timesteps(self, timesteps: pd.DatetimeIndex | None):
- """
- Set active_timesteps and refresh active_data.
+ def aggregation_weight(self) -> float | None:
+ return self.attrs.get('aggregation_weight')
- Args:
- timesteps: New timesteps to activate, or None to use all stored timesteps
-
- Raises:
- TypeError: If timesteps is not a pandas DatetimeIndex or None
- """
- if timesteps is None:
- self._active_timesteps = self.stored_data.indexes['time']
- elif isinstance(timesteps, pd.DatetimeIndex):
- self._active_timesteps = timesteps
- else:
- raise TypeError('active_timesteps must be a pandas DatetimeIndex or None')
-
- self._update_active_data()
-
- @property
- def active_data(self) -> xr.DataArray:
- """Get a view of stored_data based on active_timesteps."""
- return self._active_data
-
- @property
- def stored_data(self) -> xr.DataArray:
- """Get a copy of the full stored data."""
- return self._stored_data.copy()
+ @classmethod
+ def from_dataarray(
+ cls, da: xr.DataArray, aggregation_group: str | None = None, aggregation_weight: float | None = None
+ ):
+ """Create TimeSeriesData from DataArray, extracting metadata from attrs."""
+ # Get aggregation metadata from attrs or parameters
+ final_aggregation_group = (
+ aggregation_group if aggregation_group is not None else da.attrs.get('aggregation_group')
+ )
+ final_aggregation_weight = (
+ aggregation_weight if aggregation_weight is not None else da.attrs.get('aggregation_weight')
+ )
- @stored_data.setter
- def stored_data(self, value: NumericData):
- """
- Update stored_data and refresh active_data.
+ return cls(da, aggregation_group=final_aggregation_group, aggregation_weight=final_aggregation_weight)
- Args:
- value: New data to store
- """
- new_data = DataConverter.as_dataarray(value, timesteps=self.active_timesteps)
+ @classmethod
+ def is_timeseries_data(cls, obj) -> bool:
+ """Check if an object is TimeSeriesData."""
+ return isinstance(obj, xr.DataArray) and obj.attrs.get('__timeseries_data__', False)
- # Skip if data is unchanged to avoid overwriting backup
- if new_data.equals(self._stored_data):
- return
+ def __repr__(self):
+ agg_info = []
+ if self.aggregation_group:
+ agg_info.append(f"aggregation_group='{self.aggregation_group}'")
+ if self.aggregation_weight is not None:
+ agg_info.append(f'aggregation_weight={self.aggregation_weight}')
- self._stored_data = new_data
- self.active_timesteps = None # Reset to full timeline
+ info_str = f'TimeSeriesData({", ".join(agg_info)})' if agg_info else 'TimeSeriesData'
+ return f'{info_str}\n{super().__repr__()}'
@property
- def sel(self):
- return self.active_data.sel
+ def agg_group(self):
+ warnings.warn('agg_group is deprecated, use aggregation_group instead', DeprecationWarning, stacklevel=2)
+ return self.aggregation_group
@property
- def isel(self):
- return self.active_data.isel
-
- def _apply_operation(self, other, op):
- """Apply an operation between this TimeSeries and another object."""
- if isinstance(other, TimeSeries):
- other = other.active_data
- return op(self.active_data, other)
-
- def __add__(self, other):
- return self._apply_operation(other, lambda x, y: x + y)
-
- def __sub__(self, other):
- return self._apply_operation(other, lambda x, y: x - y)
-
- def __mul__(self, other):
- return self._apply_operation(other, lambda x, y: x * y)
-
- def __truediv__(self, other):
- return self._apply_operation(other, lambda x, y: x / y)
+ def agg_weight(self):
+ warnings.warn('agg_weight is deprecated, use aggregation_weight instead', DeprecationWarning, stacklevel=2)
+ return self.aggregation_weight
- def __radd__(self, other):
- return other + self.active_data
- def __rsub__(self, other):
- return other - self.active_data
+TemporalDataUser = (
+ int | float | np.integer | np.floating | np.ndarray | pd.Series | pd.DataFrame | xr.DataArray | TimeSeriesData
+)
+"""User data which might have a time dimension. Internally converted to an xr.DataArray with time dimension."""
- def __rmul__(self, other):
- return other * self.active_data
+TemporalData = xr.DataArray | TimeSeriesData
+"""Internally used datatypes for temporal data (data with a time dimension)."""
- def __rtruediv__(self, other):
- return other / self.active_data
- def __neg__(self) -> xr.DataArray:
- return -self.active_data
-
- def __pos__(self) -> xr.DataArray:
- return +self.active_data
-
- def __abs__(self) -> xr.DataArray:
- return abs(self.active_data)
-
- def __gt__(self, other):
- """
- Compare if this TimeSeries is greater than another.
-
- Args:
- other: Another TimeSeries to compare with
+class DataConverter:
+ """
+ Converts various data types into xarray.DataArray with specified target coordinates.
+
+ This converter handles intelligent dimension matching and broadcasting to ensure
+ the output DataArray always conforms to the specified coordinate structure.
+
+ Supported input types:
+ - Scalars: int, float, np.number (broadcast to all target dimensions)
+ - 1D data: np.ndarray, pd.Series, single-column DataFrame (matched by length/index)
+ - Multi-dimensional arrays: np.ndarray, DataFrame (matched by shape)
+ - xr.DataArray: validated and potentially broadcast to target dimensions
+
+ The converter uses smart matching strategies:
+ - Series: matched by exact index comparison
+ - 1D arrays: matched by length to target coordinates
+ - Multi-dimensional arrays: matched by shape permutation analysis
+ - DataArrays: validated for compatibility and broadcast as needed
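+
+ Example (a minimal sketch; coordinate values are illustrative):
+
+ ```python
+ import pandas as pd
+
+ coords = {'time': pd.date_range('2025-01-01', periods=3, freq='h', name='time')}
+ DataConverter.to_dataarray(42, coords=coords) # scalar broadcast over 'time'
+ ```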
+ """
- Returns:
- True if all values in this TimeSeries are greater than other
+ @staticmethod
+ def _match_series_by_index_alignment(
+ data: pd.Series, target_coords: dict[str, pd.Index], target_dims: tuple[str, ...]
+ ) -> xr.DataArray:
"""
- if isinstance(other, TimeSeries):
- return self.active_data > other.active_data
- return self.active_data > other
+ Match pandas Series to target dimension by exact index comparison.
- def __ge__(self, other):
- """
- Compare if this TimeSeries is greater than or equal to another.
+ Attempts to find a target dimension whose coordinates exactly match
+ the Series index values, ensuring proper alignment.
Args:
- other: Another TimeSeries to compare with
+ data: pandas Series to convert
+ target_coords: Available target coordinates {dim_name: coordinate_index}
+ target_dims: Target dimension names to consider for matching
Returns:
- True if all values in this TimeSeries are greater than or equal to other
- """
- if isinstance(other, TimeSeries):
- return self.active_data >= other.active_data
- return self.active_data >= other
-
- def __lt__(self, other):
- """
- Compare if this TimeSeries is less than another.
+ DataArray with Series matched to the appropriate dimension
- Args:
- other: Another TimeSeries to compare with
+ Raises:
+ ConversionError: If Series cannot be matched to any target dimension,
+ or if no target dimensions provided for multi-element Series
+ """
+ # Handle edge case: no target dimensions
+ if len(target_dims) == 0:
+ if len(data) != 1:
+ raise ConversionError(
+ f'Cannot convert multi-element Series without target dimensions. '
+ f'Series has {len(data)} elements but no target dimensions specified.'
+ )
+ return xr.DataArray(data.iloc[0])
+
+ # Attempt exact index matching with each target dimension
+ for dim_name in target_dims:
+ target_index = target_coords[dim_name]
+ if data.index.equals(target_index):
+ return xr.DataArray(data.values.copy(), coords={dim_name: target_index}, dims=dim_name)
+
+ # No exact matches found
+ available_lengths = {dim: len(target_coords[dim]) for dim in target_dims}
+ raise ConversionError(
+ f'Series index does not match any target dimension coordinates. '
+ f'Series length: {len(data)}, available coordinate lengths: {available_lengths}'
+ )
- Returns:
- True if all values in this TimeSeries are less than other
+ @staticmethod
+ def _match_1d_array_by_length(
+ data: np.ndarray, target_coords: dict[str, pd.Index], target_dims: tuple[str, ...]
+ ) -> xr.DataArray:
"""
- if isinstance(other, TimeSeries):
- return self.active_data < other.active_data
- return self.active_data < other
+ Match 1D numpy array to target dimension by length comparison.
- def __le__(self, other):
- """
- Compare if this TimeSeries is less than or equal to another.
+ Finds target dimensions whose coordinate length matches the array length.
+ Requires unique length match to avoid ambiguity.
Args:
- other: Another TimeSeries to compare with
+ data: 1D numpy array to convert
+ target_coords: Available target coordinates {dim_name: coordinate_index}
+ target_dims: Target dimension names to consider for matching
Returns:
- True if all values in this TimeSeries are less than or equal to other
- """
- if isinstance(other, TimeSeries):
- return self.active_data <= other.active_data
- return self.active_data <= other
-
- def __eq__(self, other):
- """
- Compare if this TimeSeries is equal to another.
+ DataArray with array matched to the uniquely identified dimension
- Args:
- other: Another TimeSeries to compare with
+ Raises:
+ ConversionError: If array length matches zero or multiple target dimensions,
+ or if no target dimensions provided for multi-element array
+ """
+ # Handle edge case: no target dimensions
+ if len(target_dims) == 0:
+ if len(data) != 1:
+ raise ConversionError(
+ f'Cannot convert multi-element array without target dimensions. Array has {len(data)} elements.'
+ )
+ return xr.DataArray(data[0])
+
+ # Find all dimensions with matching lengths
+ array_length = len(data)
+ matching_dims = []
+ coordinate_lengths = {}
+
+ for dim_name in target_dims:
+ coord_length = len(target_coords[dim_name])
+ coordinate_lengths[dim_name] = coord_length
+ if array_length == coord_length:
+ matching_dims.append(dim_name)
+
+ # Validate matching results
+ if len(matching_dims) == 0:
+ raise ConversionError(
+ f'Array length {array_length} does not match any target dimension lengths: {coordinate_lengths}'
+ )
+ elif len(matching_dims) > 1:
+ raise ConversionError(
+ f'Array length {array_length} matches multiple dimensions: {matching_dims}. '
+ f'Cannot uniquely determine target dimension. Consider using explicit '
+ f'dimension specification or converting to DataArray manually.'
+ )
- Returns:
- True if all values in this TimeSeries are equal to other
- """
- if isinstance(other, TimeSeries):
- return self.active_data == other.active_data
- return self.active_data == other
+ # Create DataArray with the uniquely matched dimension
+ matched_dim = matching_dims[0]
+ return xr.DataArray(data.copy(), coords={matched_dim: target_coords[matched_dim]}, dims=matched_dim)
- def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
+ @staticmethod
+ def _match_multidim_array_by_shape_permutation(
+ data: np.ndarray, target_coords: dict[str, pd.Index], target_dims: tuple[str, ...]
+ ) -> xr.DataArray:
"""
- Handle NumPy universal functions.
+ Match multi-dimensional array to target dimensions using shape permutation analysis.
- This allows NumPy functions to work with TimeSeries objects.
- """
- # Convert any TimeSeries inputs to their active_data
- inputs = [x.active_data if isinstance(x, TimeSeries) else x for x in inputs]
- return getattr(ufunc, method)(*inputs, **kwargs)
+ Analyzes all possible mappings between array shape and target coordinate lengths
+ to find the unique valid dimension assignment.
- def __repr__(self):
- """
- Get a string representation of the TimeSeries.
+ Args:
+ data: Multi-dimensional numpy array to convert
+ target_coords: Available target coordinates {dim_name: coordinate_index}
+ target_dims: Target dimension names to consider for matching
Returns:
- String showing TimeSeries details
- """
- attrs = {
- 'name': self.name,
- 'aggregation_weight': self.aggregation_weight,
- 'aggregation_group': self.aggregation_group,
- 'needs_extra_timestep': self.needs_extra_timestep,
- 'shape': self.active_data.shape,
- 'time_range': f'{self.active_timesteps[0]} to {self.active_timesteps[-1]}',
- }
- attr_str = ', '.join(f'{k}={repr(v)}' for k, v in attrs.items())
- return f'TimeSeries({attr_str})'
-
- def __str__(self):
- """
- Get a human-readable string representation.
+ DataArray with array dimensions mapped to target dimensions by shape
- Returns:
- Descriptive string with statistics
- """
- return f"TimeSeries '{self.name}': {self.stats}"
+ Raises:
+ ConversionError: If array shape cannot be uniquely mapped to target dimensions,
+ or if no target dimensions provided for multi-element array
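+
+ Example (illustrative): an array of shape (8760, 3) with coordinate
+ lengths {'time': 8760, 'scenario': 3} maps uniquely to dims ('time', 'scenario').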
+ """
+ # Handle edge case: no target dimensions
+ if len(target_dims) == 0:
+ if data.size != 1:
+ raise ConversionError(
+ f'Cannot convert multi-element array without target dimensions. '
+ f'Array has {data.size} elements with shape {data.shape}.'
+ )
+ return xr.DataArray(data.item())
+
+ array_shape = data.shape
+ coordinate_lengths = {dim: len(target_coords[dim]) for dim in target_dims}
+
+ # Find all valid dimension permutations that match the array shape
+ valid_mappings = []
+ for dim_permutation in permutations(target_dims, data.ndim):
+ shape_matches = all(
+ array_shape[i] == coordinate_lengths[dim_permutation[i]] for i in range(len(dim_permutation))
+ )
+ if shape_matches:
+ valid_mappings.append(dim_permutation)
+
+ # Validate mapping results
+ if len(valid_mappings) == 0:
+ raise ConversionError(
+ f'Array shape {array_shape} cannot be mapped to any combination of target '
+ f'coordinate lengths: {coordinate_lengths}. Consider reshaping the array '
+ f'or adjusting target coordinates.'
+ )
+ if len(valid_mappings) > 1:
+ raise ConversionError(
+ f'Array shape {array_shape} matches multiple dimension combinations: '
+ f'{valid_mappings}. Cannot uniquely determine dimension mapping. '
+ f'Consider using explicit dimension specification.'
+ )
-class TimeSeriesCollection:
- """
- Collection of TimeSeries objects with shared timestep management.
+ # Create DataArray with the uniquely determined mapping
+ matched_dims = valid_mappings[0]
+ matched_coords = {dim: target_coords[dim] for dim in matched_dims}
- TimeSeriesCollection handles multiple TimeSeries objects with synchronized
- timesteps, provides operations on collections, and manages extra timesteps.
- """
+ return xr.DataArray(data.copy(), coords=matched_coords, dims=matched_dims)
- def __init__(
- self,
- timesteps: pd.DatetimeIndex,
- hours_of_last_timestep: float | None = None,
- hours_of_previous_timesteps: float | np.ndarray | None = None,
- ):
- """
- Args:
- timesteps: The timesteps of the Collection.
- hours_of_last_timestep: The duration of the last time step. Uses the last time interval if not specified
- hours_of_previous_timesteps: The duration of previous timesteps.
- If None, the first time increment of time_series is used.
- This is needed to calculate previous durations (for example consecutive_on_hours).
- If you use an array, take care that its long enough to cover all previous values!
+ @staticmethod
+ def _broadcast_dataarray_to_target_specification(
+ source_data: xr.DataArray, target_coords: dict[str, pd.Index], target_dims: tuple[str, ...]
+ ) -> xr.DataArray:
"""
- # Prepare and validate timesteps
- self._validate_timesteps(timesteps)
- self.hours_of_previous_timesteps = self._calculate_hours_of_previous_timesteps(
- timesteps, hours_of_previous_timesteps
- )
-
- # Set up timesteps and hours
- self.all_timesteps = timesteps
- self.all_timesteps_extra = self._create_timesteps_with_extra(timesteps, hours_of_last_timestep)
- self.all_hours_per_timestep = self.calculate_hours_per_timestep(self.all_timesteps_extra)
-
- # Active timestep tracking
- self._active_timesteps = None
- self._active_timesteps_extra = None
- self._active_hours_per_timestep = None
-
- # Dictionary of time series by name
- self.time_series_data: dict[str, TimeSeries] = {}
+ Broadcast DataArray to conform to target coordinate and dimension specification.
- # Aggregation
- self.group_weights: dict[str, float] = {}
- self.weights: dict[str, float] = {}
-
- @classmethod
- def with_uniform_timesteps(
- cls, start_time: pd.Timestamp, periods: int, freq: str, hours_per_step: float | None = None
- ) -> 'TimeSeriesCollection':
- """Create a collection with uniform timesteps."""
- timesteps = pd.date_range(start_time, periods=periods, freq=freq, name='time')
- return cls(timesteps, hours_of_previous_timesteps=hours_per_step)
-
- def create_time_series(
- self, data: NumericData | TimeSeriesData, name: str, needs_extra_timestep: bool = False
- ) -> TimeSeries:
- """
- Creates a TimeSeries from the given data and adds it to the collection.
+ Performs comprehensive validation and broadcasting to ensure the result exactly
+ matches the target specification. Handles scalar expansion, dimension validation,
+ coordinate compatibility checking, and broadcasting to additional dimensions.
Args:
- data: The data to create the TimeSeries from.
- name: The name of the TimeSeries.
- needs_extra_timestep: Whether to create an additional timestep at the end of the timesteps.
+ source_data: Source DataArray to broadcast
+ target_coords: Target coordinates {dim_name: coordinate_index}
+ target_dims: Target dimension names in desired order
Returns:
- The created TimeSeries.
+ DataArray broadcast to target specification with proper dimension ordering
- """
- # Check for duplicate name
- if name in self.time_series_data:
- raise ValueError(f"TimeSeries '{name}' already exists in this collection")
-
- # Determine which timesteps to use
- timesteps_to_use = self.timesteps_extra if needs_extra_timestep else self.timesteps
-
- # Create the time series
- if isinstance(data, TimeSeriesData):
- time_series = TimeSeries.from_datasource(
- name=name,
- data=data.data,
- timesteps=timesteps_to_use,
- aggregation_weight=data.agg_weight,
- aggregation_group=data.agg_group,
- needs_extra_timestep=needs_extra_timestep,
- )
- # Connect the user time series to the created TimeSeries
- data.label = name
- else:
- time_series = TimeSeries.from_datasource(
- name=name, data=data, timesteps=timesteps_to_use, needs_extra_timestep=needs_extra_timestep
+ Raises:
+ ConversionError: If broadcasting is impossible due to incompatible dimensions
+ or coordinate mismatches
+ """
+ # Validate: cannot reduce dimensions
+ if len(source_data.dims) > len(target_dims):
+ raise ConversionError(
+ f'Cannot reduce DataArray dimensionality from {len(source_data.dims)} '
+ f'to {len(target_dims)} dimensions. Source dims: {source_data.dims}, '
+ f'target dims: {target_dims}'
)
- # Add to the collection
- self.add_time_series(time_series)
+ # Validate: all source dimensions must exist in target
+ missing_dims = set(source_data.dims) - set(target_dims)
+ if missing_dims:
+ raise ConversionError(
+ f'Source DataArray has dimensions {missing_dims} not present in target dimensions {target_dims}'
+ )
- return time_series
+ # Validate: coordinate compatibility for overlapping dimensions
+ for dim in source_data.dims:
+ if dim in source_data.coords and dim in target_coords:
+ source_coords = source_data.coords[dim]
+ target_coords_for_dim = target_coords[dim]
- def calculate_aggregation_weights(self) -> dict[str, float]:
- """Calculate and return aggregation weights for all time series."""
- self.group_weights = self._calculate_group_weights()
- self.weights = self._calculate_weights()
+ if not np.array_equal(source_coords.values, target_coords_for_dim.values):
+ raise ConversionError(
+ f'Coordinate mismatch for dimension "{dim}". '
+ f'Source and target coordinates have different values.'
+ )
- if np.all(np.isclose(list(self.weights.values()), 1, atol=1e-6)):
- logger.info('All Aggregation weights were set to 1')
+ # Create target template for broadcasting
+ target_shape = [len(target_coords[dim]) for dim in target_dims]
+ target_template = xr.DataArray(np.empty(target_shape), coords=target_coords, dims=target_dims)
- return self.weights
+ # Perform broadcasting and ensure proper dimension ordering
+ broadcasted = source_data.broadcast_like(target_template)
+ return broadcasted.transpose(*target_dims)
- def activate_timesteps(self, active_timesteps: pd.DatetimeIndex | None = None):
- """
- Update active timesteps for the collection and all time series.
- If no arguments are provided, the active timesteps are reset.
+ @classmethod
+ def to_dataarray(
+ cls,
+ data: int
+ | float
+ | bool
+ | np.integer
+ | np.floating
+ | np.bool_
+ | np.ndarray
+ | pd.Series
+ | pd.DataFrame
+ | xr.DataArray,
+ coords: dict[str, pd.Index] | None = None,
+ ) -> xr.DataArray:
+ """
+ Convert various data types to xarray.DataArray with specified target coordinates.
+
+ This is the main conversion method that intelligently handles different input types
+ and ensures the result conforms to the specified coordinate structure through
+ smart dimension matching and broadcasting.
Args:
- active_timesteps: The active timesteps of the model.
- If None, the all timesteps of the TimeSeriesCollection are taken.
- """
- if active_timesteps is None:
- return self.reset()
-
- if not np.all(np.isin(active_timesteps, self.all_timesteps)):
- raise ValueError('active_timesteps must be a subset of the timesteps of the TimeSeriesCollection')
-
- # Calculate derived timesteps
- self._active_timesteps = active_timesteps
- first_ts_index = np.where(self.all_timesteps == active_timesteps[0])[0][0]
- last_ts_idx = np.where(self.all_timesteps == active_timesteps[-1])[0][0]
- self._active_timesteps_extra = self.all_timesteps_extra[first_ts_index : last_ts_idx + 2]
- self._active_hours_per_timestep = self.all_hours_per_timestep.isel(time=slice(first_ts_index, last_ts_idx + 1))
+ data: Input data to convert. Supported types:
+ - Scalars: int, float, bool, np.integer, np.floating, np.bool_
+ - Arrays: np.ndarray (1D and multi-dimensional)
+ - Pandas: pd.Series, pd.DataFrame
+ - xarray: xr.DataArray
+ coords: Target coordinate specification as {dimension_name: coordinate_index}.
+ All coordinate indices must be pandas.Index objects.
- # Update all time series
- self._update_time_series_timesteps()
-
- def reset(self):
- """Reset active timesteps to defaults for all time series."""
- self._active_timesteps = None
- self._active_timesteps_extra = None
- self._active_hours_per_timestep = None
-
- for time_series in self.time_series_data.values():
- time_series.reset()
-
- def restore_data(self):
- """Restore original data for all time series."""
- for time_series in self.time_series_data.values():
- time_series.restore_data()
-
- def add_time_series(self, time_series: TimeSeries):
- """Add an existing TimeSeries to the collection."""
- if time_series.name in self.time_series_data:
- raise ValueError(f"TimeSeries '{time_series.name}' already exists in this collection")
+ Returns:
+ DataArray conforming to the target coordinate specification,
+ with input data appropriately matched and broadcast
- self.time_series_data[time_series.name] = time_series
+ Raises:
+ ConversionError: If data type is unsupported, conversion fails,
+ or broadcasting to target coordinates is impossible
+
+ Examples:
+ # Scalar broadcasting
+ >>> coords = {'x': pd.Index([1, 2, 3]), 'y': pd.Index(['a', 'b'])}
+ >>> converter.to_dataarray(42, coords)
+ # Returns: DataArray with shape (3, 2), all values = 42
+
+ # Series index matching
+ >>> series = pd.Series([10, 20, 30], index=[1, 2, 3])
+ >>> converter.to_dataarray(series, coords)
+ # Returns: DataArray matched to 'x' dimension, broadcast to 'y'
+
+ # Array shape matching
+ >>> array = np.array([[1, 2], [3, 4], [5, 6]]) # Shape (3, 2)
+ >>> converter.to_dataarray(array, coords)
+ # Returns: DataArray with dimensions ('x', 'y') based on shape
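+
+ # DataFrame matching (sketch): a single-column frame behaves like a Series;
+ # multi-column frames are matched by their 2D shape
+ >>> df = pd.DataFrame({'a': [10, 20, 30], 'b': [1, 2, 3]}, index=[1, 2, 3])
+ >>> converter.to_dataarray(df, coords)
+ # Returns: DataArray with dimensions ('x', 'y') from the (3, 2) shape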
+ """
+ # Prepare and validate target specification
+ if coords is None:
+ coords = {}
+
+ validated_coords, target_dims = cls._validate_and_prepare_target_coordinates(coords)
+
+ # Convert input data to intermediate DataArray based on type
+ if isinstance(data, (int, float, bool, np.integer, np.floating, np.bool_)):
+ # Scalar values - create scalar DataArray
+ intermediate = xr.DataArray(data.item() if hasattr(data, 'item') else data)
+
+ elif isinstance(data, np.ndarray):
+ # NumPy arrays - dispatch based on dimensionality
+ if data.ndim == 0:
+ # 0-dimensional array (scalar)
+ intermediate = xr.DataArray(data.item())
+ elif data.ndim == 1:
+ # 1-dimensional array
+ intermediate = cls._match_1d_array_by_length(data, validated_coords, target_dims)
+ else:
+ # Multi-dimensional array
+ intermediate = cls._match_multidim_array_by_shape_permutation(data, validated_coords, target_dims)
+
+ elif isinstance(data, pd.Series):
+ # Pandas Series - validate and match by index
+ if isinstance(data.index, pd.MultiIndex):
+ raise ConversionError('MultiIndex Series are not supported. Please use a single-level index.')
+ intermediate = cls._match_series_by_index_alignment(data, validated_coords, target_dims)
+
+ elif isinstance(data, pd.DataFrame):
+ # Pandas DataFrame - validate and convert
+ if isinstance(data.index, pd.MultiIndex):
+ raise ConversionError('MultiIndex DataFrames are not supported. Please use a single-level index.')
+ if len(data.columns) == 0 or data.empty:
+ raise ConversionError('DataFrame must have at least one column and cannot be empty.')
+
+ if len(data.columns) == 1:
+ # Single-column DataFrame - treat as Series
+ series_data = data.iloc[:, 0]
+ intermediate = cls._match_series_by_index_alignment(series_data, validated_coords, target_dims)
+ else:
+ # Multi-column DataFrame - treat as multi-dimensional array
+ intermediate = cls._match_multidim_array_by_shape_permutation(
+ data.to_numpy(), validated_coords, target_dims
+ )
- def insert_new_data(self, data: pd.DataFrame, include_extra_timestep: bool = False):
- """
- Update time series with new data from a DataFrame.
+ elif isinstance(data, xr.DataArray):
+ # Existing DataArray - use as-is
+ intermediate = data.copy()
- Args:
- data: DataFrame containing new data with timestamps as index
- include_extra_timestep: Whether the provided data already includes the extra timestep, by default False
- """
- if not isinstance(data, pd.DataFrame):
- raise TypeError(f'data must be a pandas DataFrame, got {type(data).__name__}')
-
- # Check if the DataFrame index matches the expected timesteps
- expected_timesteps = self.timesteps_extra if include_extra_timestep else self.timesteps
- if not data.index.equals(expected_timesteps):
- raise ValueError(
- f'DataFrame index must match {"collection timesteps with extra timestep" if include_extra_timestep else "collection timesteps"}'
+ else:
+ # Unsupported data type
+ supported_types = [
+ 'int',
+ 'float',
+ 'bool',
+ 'np.integer',
+ 'np.floating',
+ 'np.bool_',
+ 'np.ndarray',
+ 'pd.Series',
+ 'pd.DataFrame',
+ 'xr.DataArray',
+ ]
+ raise ConversionError(
+ f'Unsupported data type: {type(data).__name__}. Supported types: {", ".join(supported_types)}'
)
- for name, ts in self.time_series_data.items():
- if name in data.columns:
- if not ts.needs_extra_timestep:
- # For time series without extra timestep
- if include_extra_timestep:
- # If data includes extra timestep but series doesn't need it, exclude the last point
- ts.stored_data = data[name].iloc[:-1]
- else:
- # Use data as is
- ts.stored_data = data[name]
- else:
- # For time series with extra timestep
- if include_extra_timestep:
- # Data already includes extra timestep
- ts.stored_data = data[name]
- else:
- # Need to add extra timestep - extrapolate from the last value
- extra_step_value = data[name].iloc[-1]
- extra_step_index = pd.DatetimeIndex([self.timesteps_extra[-1]], name='time')
- extra_step_series = pd.Series([extra_step_value], index=extra_step_index)
-
- # Combine the regular data with the extra timestep
- ts.stored_data = pd.concat([data[name], extra_step_series])
-
- logger.debug(f'Updated data for {name}')
-
- def to_dataframe(
- self, filtered: Literal['all', 'constant', 'non_constant'] = 'non_constant', include_extra_timestep: bool = True
- ) -> pd.DataFrame:
- """
- Convert collection to DataFrame with optional filtering and timestep control.
-
- Args:
- filtered: Filter time series by variability, by default 'non_constant'
- include_extra_timestep: Whether to include the extra timestep in the result, by default True
+ # Broadcast intermediate result to target specification
+ return cls._broadcast_dataarray_to_target_specification(intermediate, validated_coords, target_dims)
- Returns:
- DataFrame representation of the collection
+ @staticmethod
+ def _validate_and_prepare_target_coordinates(
+ coords: dict[str, pd.Index],
+ ) -> tuple[dict[str, pd.Index], tuple[str, ...]]:
"""
- include_constants = filtered != 'non_constant'
- ds = self.to_dataset(include_constants=include_constants)
-
- if not include_extra_timestep:
- ds = ds.isel(time=slice(None, -1))
-
- df = ds.to_dataframe()
+ Validate and prepare target coordinate specification for DataArray creation.
- # Apply filtering
- if filtered == 'all':
- return df
- elif filtered == 'constant':
- return df.loc[:, df.nunique() == 1]
- elif filtered == 'non_constant':
- return df.loc[:, df.nunique() > 1]
- else:
- raise ValueError("filtered must be one of: 'all', 'constant', 'non_constant'")
-
- def to_dataset(self, include_constants: bool = True) -> xr.Dataset:
- """
- Combine all time series into a single Dataset with all timesteps.
+ Performs comprehensive validation of coordinate inputs and prepares them
+ for use in DataArray construction with appropriate naming and type checking.
Args:
- include_constants: Whether to include time series with constant values, by default True
+ coords: Raw coordinate specification {dimension_name: coordinate_index}
Returns:
- Dataset containing all selected time series with all timesteps
- """
- # Determine which series to include
- if include_constants:
- series_to_include = self.time_series_data.values()
- else:
- series_to_include = self.non_constants
-
- # Create individual datasets and merge them
- ds = xr.merge([ts.active_data.to_dataset(name=ts.name) for ts in series_to_include])
-
- # Ensure the correct time coordinates
- ds = ds.reindex(time=self.timesteps_extra)
-
- ds.attrs.update(
- {
- 'timesteps_extra': f'{self.timesteps_extra[0]} ... {self.timesteps_extra[-1]} | len={len(self.timesteps_extra)}',
- 'hours_per_timestep': self._format_stats(self.hours_per_timestep),
- }
- )
-
- return ds
-
- def _update_time_series_timesteps(self):
- """Update active timesteps for all time series."""
- for ts in self.time_series_data.values():
- if ts.needs_extra_timestep:
- ts.active_timesteps = self.timesteps_extra
- else:
- ts.active_timesteps = self.timesteps
+ Tuple of (validated_coordinates_dict, dimension_names_tuple)
- @staticmethod
- def _validate_timesteps(timesteps: pd.DatetimeIndex):
- """Validate timesteps format and rename if needed."""
- if not isinstance(timesteps, pd.DatetimeIndex):
- raise TypeError('timesteps must be a pandas DatetimeIndex')
-
- if len(timesteps) < 2:
- raise ValueError('timesteps must contain at least 2 timestamps')
-
- # Ensure timesteps has the required name
- if timesteps.name != 'time':
- logger.warning('Renamed timesteps to "time" (was "%s")', timesteps.name)
- timesteps.name = 'time'
-
- @staticmethod
- def _create_timesteps_with_extra(
- timesteps: pd.DatetimeIndex, hours_of_last_timestep: float | None
- ) -> pd.DatetimeIndex:
- """Create timesteps with an extra step at the end."""
- if hours_of_last_timestep is not None:
- # Create the extra timestep using the specified duration
- last_date = pd.DatetimeIndex([timesteps[-1] + pd.Timedelta(hours=hours_of_last_timestep)], name='time')
- else:
- # Use the last interval as the extra timestep duration
- last_date = pd.DatetimeIndex([timesteps[-1] + (timesteps[-1] - timesteps[-2])], name='time')
-
- # Combine with original timesteps
- return pd.DatetimeIndex(timesteps.append(last_date), name='time')
-
- @staticmethod
- def _calculate_hours_of_previous_timesteps(
- timesteps: pd.DatetimeIndex, hours_of_previous_timesteps: float | np.ndarray | None
- ) -> float | np.ndarray:
- """Calculate duration of regular timesteps."""
- if hours_of_previous_timesteps is not None:
- return hours_of_previous_timesteps
-
- # Calculate from the first interval
- first_interval = timesteps[1] - timesteps[0]
- return first_interval.total_seconds() / 3600 # Convert to hours
+ Raises:
+ ConversionError: If any coordinates are invalid, improperly typed,
+ or have inconsistent naming
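+
+ Example (illustrative):
+
+ >>> coords = {'time': pd.date_range('2024-01-01', periods=4, freq='h')}
+ >>> converter._validate_and_prepare_target_coordinates(coords)[1]
+ ('time',)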
+ """
+ validated_coords = {}
+ dimension_names = []
- @staticmethod
- def calculate_hours_per_timestep(timesteps_extra: pd.DatetimeIndex) -> xr.DataArray:
- """Calculate duration of each timestep."""
- # Calculate differences between consecutive timestamps
- hours_per_step = np.diff(timesteps_extra) / pd.Timedelta(hours=1)
+ for dim_name, coord_index in coords.items():
+ # Type validation
+ if not isinstance(coord_index, pd.Index):
+ raise ConversionError(
+ f'Coordinate for dimension "{dim_name}" must be a pandas.Index, got {type(coord_index).__name__}'
+ )
- return xr.DataArray(
- data=hours_per_step, coords={'time': timesteps_extra[:-1]}, dims=('time',), name='hours_per_step'
- )
+ # Non-empty validation
+ if len(coord_index) == 0:
+ raise ConversionError(f'Coordinate for dimension "{dim_name}" cannot be empty')
- def _calculate_group_weights(self) -> dict[str, float]:
- """Calculate weights for aggregation groups."""
- # Count series in each group
- groups = [ts.aggregation_group for ts in self.time_series_data.values() if ts.aggregation_group is not None]
- group_counts = Counter(groups)
-
- # Calculate weight for each group (1/count)
- return {group: 1 / count for group, count in group_counts.items()}
-
- def _calculate_weights(self) -> dict[str, float]:
- """Calculate weights for all time series."""
- # Calculate weight for each time series
- weights = {}
- for name, ts in self.time_series_data.items():
- if ts.aggregation_group is not None:
- # Use group weight
- weights[name] = self.group_weights.get(ts.aggregation_group, 1)
- else:
- # Use individual weight or default to 1
- weights[name] = ts.aggregation_weight or 1
+ # Ensure coordinate index has consistent naming
+ if coord_index.name != dim_name:
+ coord_index = coord_index.rename(dim_name)
- return weights
+ # Special validation for time dimensions (common pattern)
+ if dim_name == 'time' and not isinstance(coord_index, pd.DatetimeIndex):
+ raise ConversionError(
+ f'Dimension named "time" must use a DatetimeIndex for proper '
+ f'time-series functionality, got {type(coord_index).__name__}'
+ )
- def _format_stats(self, data) -> str:
- """Format statistics for a data array."""
- if hasattr(data, 'values'):
- values = data.values
- else:
- values = np.asarray(data)
+ validated_coords[dim_name] = coord_index
+ dimension_names.append(dim_name)
- mean_val = np.mean(values)
- min_val = np.min(values)
- max_val = np.max(values)
+ return validated_coords, tuple(dimension_names)
- return f'mean: {mean_val:.2f}, min: {min_val:.2f}, max: {max_val:.2f}'
- def __getitem__(self, name: str) -> TimeSeries:
- """Get a TimeSeries by name."""
+def get_dataarray_stats(arr: xr.DataArray) -> dict:
+ """Generate statistical summary of a DataArray."""
+ stats = {}
+ if arr.dtype.kind in 'biufc': # bool, int, uint, float, complex
try:
- return self.time_series_data[name]
- except KeyError as e:
- raise KeyError(f'TimeSeries "{name}" not found in the TimeSeriesCollection') from e
-
- def __iter__(self) -> Iterator[TimeSeries]:
- """Iterate through all TimeSeries in the collection."""
- return iter(self.time_series_data.values())
-
- def __len__(self) -> int:
- """Get the number of TimeSeries in the collection."""
- return len(self.time_series_data)
-
- def __contains__(self, item: str | TimeSeries) -> bool:
- """Check if a TimeSeries exists in the collection."""
- if isinstance(item, str):
- return item in self.time_series_data
- elif isinstance(item, TimeSeries):
- return any([item is ts for ts in self.time_series_data.values()])
- return False
+ stats.update(
+ {
+ 'min': float(arr.min().values),
+ 'max': float(arr.max().values),
+ 'mean': float(arr.mean().values),
+ 'median': float(arr.median().values),
+ 'std': float(arr.std().values),
+ 'count': int(arr.count().values), # non-null count
+ }
+ )
- @property
- def non_constants(self) -> list[TimeSeries]:
- """Get time series with varying values."""
- return [ts for ts in self.time_series_data.values() if not ts.all_equal]
+ # Add null count only if there are nulls
+ null_count = int(arr.isnull().sum().values)
+ if null_count > 0:
+ stats['nulls'] = null_count
- @property
- def constants(self) -> list[TimeSeries]:
- """Get time series with constant values."""
- return [ts for ts in self.time_series_data.values() if ts.all_equal]
+ except Exception:
+ # Statistics are best-effort; skip silently if a reduction fails
+ # (e.g. for object dtypes or otherwise non-reducible values).
+ pass
- @property
- def timesteps(self) -> pd.DatetimeIndex:
- """Get the active timesteps."""
- return self.all_timesteps if self._active_timesteps is None else self._active_timesteps
+ return stats
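+
+# Example (illustrative sketch; values shown indicatively):
+#
+# >>> arr = xr.DataArray([1.0, 2.0, np.nan, 4.0])
+# >>> get_dataarray_stats(arr)
+# {'min': 1.0, 'max': 4.0, 'mean': 2.33..., 'median': 2.0, 'std': 1.24..., 'count': 3, 'nulls': 1}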
- @property
- def timesteps_extra(self) -> pd.DatetimeIndex:
- """Get the active timesteps with extra step."""
- return self.all_timesteps_extra if self._active_timesteps_extra is None else self._active_timesteps_extra
- @property
- def hours_per_timestep(self) -> xr.DataArray:
- """Get the duration of each active timestep."""
- return (
- self.all_hours_per_timestep if self._active_hours_per_timestep is None else self._active_hours_per_timestep
- )
+def drop_constant_arrays(ds: xr.Dataset, dim: str = 'time', drop_arrays_without_dim: bool = True) -> xr.Dataset:
+ """Drop variables with constant values along a dimension.
- @property
- def hours_of_last_timestep(self) -> float:
- """Get the duration of the last timestep."""
- return float(self.hours_per_timestep[-1].item())
-
- def __repr__(self):
- return f'TimeSeriesCollection:\n{self.to_dataset()}'
-
- def __str__(self):
- longest_name = max([time_series.name for time_series in self.time_series_data], key=len)
+ Args:
+ ds: Input dataset to filter.
+ dim: Dimension along which to check for constant values.
+ drop_arrays_without_dim: If True, also drop variables that don't have the specified dimension.
- stats_summary = '\n'.join(
- [
- f' - {time_series.name:<{len(longest_name)}}: {get_numeric_stats(time_series.active_data)}'
- for time_series in self.time_series_data
- ]
+ Returns:
+ Dataset with constant variables removed.
+ """
+ drop_vars = []
+
+ for name, da in ds.data_vars.items():
+ # Skip variables without the dimension
+ if dim not in da.dims:
+ if drop_arrays_without_dim:
+ drop_vars.append(name)
+ continue
+
+ # Check if variable is constant along the dimension
+ if (da.max(dim, skipna=True) == da.min(dim, skipna=True)).all().item():
+ drop_vars.append(name)
+
+ if drop_vars:
+ drop_vars = sorted(drop_vars)
+ logger.debug(
+ f'Dropping {len(drop_vars)} constant/dimension-less arrays: {drop_vars[:5]}{"..." if len(drop_vars) > 5 else ""}'
)
- return (
- f'TimeSeriesCollection with {len(self.time_series_data)} series\n'
- f' Time Range: {self.timesteps[0]} → {self.timesteps[-1]}\n'
- f' No. of timesteps: {len(self.timesteps)} + 1 extra\n'
- f' Hours per timestep: {get_numeric_stats(self.hours_per_timestep)}\n'
- f' Time Series Data:\n'
- f'{stats_summary}'
- )
+ return ds.drop_vars(drop_vars)
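+
+# Example (illustrative sketch): constants along 'time' are dropped, varying data kept:
+#
+# >>> ds = xr.Dataset(
+# ... {'flat': ('time', [5.0, 5.0, 5.0]), 'ramp': ('time', [1.0, 2.0, 3.0])},
+# ... coords={'time': pd.date_range('2024-01-01', periods=3, freq='h')},
+# ... )
+# >>> list(drop_constant_arrays(ds).data_vars)
+# ['ramp']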
-def get_numeric_stats(data: xr.DataArray, decimals: int = 2, padd: int = 10) -> str:
- """Calculates the mean, median, min, max, and standard deviation of a numeric DataArray."""
- format_spec = f'>{padd}.{decimals}f' if padd else f'.{decimals}f'
- if np.unique(data).size == 1:
- return f'{data.max().item():{format_spec}} (constant)'
- mean = data.mean().item()
- median = data.median().item()
- min_val = data.min().item()
- max_val = data.max().item()
- std = data.std().item()
- return f'{mean:{format_spec}} (mean), {median:{format_spec}} (median), {min_val:{format_spec}} (min), {max_val:{format_spec}} (max), {std:{format_spec}} (std)'
+# Backward compatibility aliases
+# TODO: Needed?
+NonTemporalDataUser = PeriodicDataUser
+NonTemporalData = PeriodicData
diff --git a/flixopt/effects.py b/flixopt/effects.py
index 31c941e11..2c7607b02 100644
--- a/flixopt/effects.py
+++ b/flixopt/effects.py
@@ -9,14 +9,16 @@
import logging
import warnings
+from collections import deque
from typing import TYPE_CHECKING, Literal
import linopy
import numpy as np
+import xarray as xr
-from .core import NumericDataTS, Scalar, TimeSeries
+from .core import PeriodicDataUser, Scalar, TemporalData, TemporalDataUser
from .features import ShareAllocationModel
-from .structure import Element, ElementModel, Model, SystemModel, register_class_for_io
+from .structure import Element, ElementModel, FlowSystemModel, Submodel, register_class_for_io
if TYPE_CHECKING:
from collections.abc import Iterator
@@ -48,36 +50,48 @@ class Effect(Element):
without effect dictionaries. Used for simplified effect specification (and less boilerplate code).
is_objective: If True, this effect serves as the optimization objective function.
Only one effect can be marked as objective per optimization.
- specific_share_to_other_effects_operation: Operational cross-effect contributions.
- Maps this effect's operational values to contributions to other effects
- specific_share_to_other_effects_invest: Investment cross-effect contributions.
- Maps this effect's investment values to contributions to other effects.
- minimum_operation: Minimum allowed total operational contribution across all timesteps.
- maximum_operation: Maximum allowed total operational contribution across all timesteps.
- minimum_operation_per_hour: Minimum allowed operational contribution per timestep.
- maximum_operation_per_hour: Maximum allowed operational contribution per timestep.
- minimum_invest: Minimum allowed total investment contribution.
- maximum_invest: Maximum allowed total investment contribution.
- minimum_total: Minimum allowed total effect (operation + investment combined).
- maximum_total: Maximum allowed total effect (operation + investment combined).
+ share_from_temporal: Temporal cross-effect contributions.
+ Maps temporal contributions from other effects to this effect.
+ share_from_periodic: Periodic cross-effect contributions.
+ Maps periodic contributions from other effects to this effect.
+ minimum_temporal: Minimum allowed total contribution across all timesteps.
+ maximum_temporal: Maximum allowed total contribution across all timesteps.
+ minimum_per_hour: Minimum allowed contribution per hour.
+ maximum_per_hour: Maximum allowed contribution per hour.
+ minimum_periodic: Minimum allowed total periodic contribution.
+ maximum_periodic: Maximum allowed total periodic contribution.
+ minimum_total: Minimum allowed total effect (temporal + periodic combined).
+ maximum_total: Maximum allowed total effect (temporal + periodic combined).
meta_data: Used to store additional information. Not used internally but saved
in results. Only use Python native types.
+ **Deprecated Parameters** (for backwards compatibility):
+ minimum_operation: Use `minimum_temporal` instead.
+ maximum_operation: Use `maximum_temporal` instead.
+ minimum_invest: Use `minimum_periodic` instead.
+ maximum_invest: Use `maximum_periodic` instead.
+ minimum_operation_per_hour: Use `minimum_per_hour` instead.
+ maximum_operation_per_hour: Use `maximum_per_hour` instead.
+
Examples:
Basic cost objective:
```python
- cost_effect = Effect(label='system_costs', unit='€', description='Total system costs', is_objective=True)
+ cost_effect = Effect(
+ label='system_costs',
+ unit='€',
+ description='Total system costs',
+ is_objective=True,
+ )
```
- CO2 emissions with carbon pricing:
+ CO2 emissions:
```python
co2_effect = Effect(
- label='co2_emissions',
+ label='CO2',
unit='kg_CO2',
description='Carbon dioxide emissions',
- specific_share_to_other_effects_operation={'costs': 50}, # €50/t_CO2
maximum_total=1_000_000, # 1000 t CO2 annual limit
)
```
@@ -100,7 +114,21 @@ class Effect(Element):
label='primary_energy',
unit='kWh_primary',
description='Primary energy consumption',
- specific_share_to_other_effects_operation={'costs': 0.08}, # €0.08/kWh
+ )
+ ```
+
+ Cost objective with carbon and primary energy pricing:
+
+ ```python
+ cost_effect = Effect(
+ label='system_costs',
+ unit='€',
+ description='Total system costs',
+ is_objective=True,
+ share_from_temporal={
+ 'primary_energy': 0.08, # 0.08 €/kWh_primary
+ 'CO2': 0.2, # Carbon pricing: 0.2 €/kg_CO2 added to costs
+ },
)
```
@@ -111,8 +139,8 @@ class Effect(Element):
label='water_consumption',
unit='m³',
description='Industrial water usage',
- minimum_operation_per_hour=10, # Minimum 10 m³/h for process stability
- maximum_operation_per_hour=500, # Maximum 500 m³/h capacity limit
+ minimum_per_hour=10, # Minimum 10 m³/h for process stability
+ maximum_per_hour=500, # Maximum 500 m³/h capacity limit
maximum_total=100_000, # Annual permit limit: 100,000 m³
)
```
@@ -127,8 +155,7 @@ class Effect(Element):
across all contributions to each effect manually.
Effects are accumulated as:
- - Total = Σ(operational contributions) + Σ(investment contributions)
- - Cross-effects add to target effects based on specific_share ratios
+ - Total = Σ(temporal contributions) + Σ(periodic contributions)
"""
@@ -140,53 +167,220 @@ def __init__(
meta_data: dict | None = None,
is_standard: bool = False,
is_objective: bool = False,
- specific_share_to_other_effects_operation: EffectValuesUser | None = None,
- specific_share_to_other_effects_invest: EffectValuesUser | None = None,
- minimum_operation: Scalar | None = None,
- maximum_operation: Scalar | None = None,
- minimum_invest: Scalar | None = None,
- maximum_invest: Scalar | None = None,
- minimum_operation_per_hour: NumericDataTS | None = None,
- maximum_operation_per_hour: NumericDataTS | None = None,
+ share_from_temporal: TemporalEffectsUser | None = None,
+ share_from_periodic: PeriodicEffectsUser | None = None,
+ minimum_temporal: PeriodicDataUser | None = None,
+ maximum_temporal: PeriodicDataUser | None = None,
+ minimum_periodic: PeriodicDataUser | None = None,
+ maximum_periodic: PeriodicDataUser | None = None,
+ minimum_per_hour: TemporalDataUser | None = None,
+ maximum_per_hour: TemporalDataUser | None = None,
minimum_total: Scalar | None = None,
maximum_total: Scalar | None = None,
+ **kwargs,
):
super().__init__(label, meta_data=meta_data)
- self.label = label
self.unit = unit
self.description = description
self.is_standard = is_standard
self.is_objective = is_objective
- self.specific_share_to_other_effects_operation: EffectValuesUser = (
- specific_share_to_other_effects_operation or {}
- )
- self.specific_share_to_other_effects_invest: EffectValuesUser = specific_share_to_other_effects_invest or {}
- self.minimum_operation = minimum_operation
- self.maximum_operation = maximum_operation
- self.minimum_operation_per_hour = minimum_operation_per_hour
- self.maximum_operation_per_hour = maximum_operation_per_hour
- self.minimum_invest = minimum_invest
- self.maximum_invest = maximum_invest
+ self.share_from_temporal: TemporalEffectsUser = share_from_temporal if share_from_temporal is not None else {}
+ self.share_from_periodic: PeriodicEffectsUser = share_from_periodic if share_from_periodic is not None else {}
+
+ # Handle backwards compatibility for deprecated parameters using centralized helper
+ minimum_temporal = self._handle_deprecated_kwarg(
+ kwargs, 'minimum_operation', 'minimum_temporal', minimum_temporal
+ )
+ maximum_temporal = self._handle_deprecated_kwarg(
+ kwargs, 'maximum_operation', 'maximum_temporal', maximum_temporal
+ )
+ minimum_periodic = self._handle_deprecated_kwarg(kwargs, 'minimum_invest', 'minimum_periodic', minimum_periodic)
+ maximum_periodic = self._handle_deprecated_kwarg(kwargs, 'maximum_invest', 'maximum_periodic', maximum_periodic)
+ minimum_per_hour = self._handle_deprecated_kwarg(
+ kwargs, 'minimum_operation_per_hour', 'minimum_per_hour', minimum_per_hour
+ )
+ maximum_per_hour = self._handle_deprecated_kwarg(
+ kwargs, 'maximum_operation_per_hour', 'maximum_per_hour', maximum_per_hour
+ )
+
+ # Validate any remaining unexpected kwargs
+ self._validate_kwargs(kwargs)
+
+ # Set attributes directly
+ self.minimum_temporal = minimum_temporal
+ self.maximum_temporal = maximum_temporal
+ self.minimum_periodic = minimum_periodic
+ self.maximum_periodic = maximum_periodic
+ self.minimum_per_hour = minimum_per_hour
+ self.maximum_per_hour = maximum_per_hour
self.minimum_total = minimum_total
self.maximum_total = maximum_total
- def transform_data(self, flow_system: FlowSystem):
- self.minimum_operation_per_hour = flow_system.create_time_series(
- f'{self.label_full}|minimum_operation_per_hour', self.minimum_operation_per_hour
+ # Backwards compatible properties (deprecated)
+ @property
+ def minimum_operation(self):
+ """DEPRECATED: Use 'minimum_temporal' property instead."""
+ warnings.warn(
+ "Property 'minimum_operation' is deprecated. Use 'minimum_temporal' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.minimum_temporal
+
+ @minimum_operation.setter
+ def minimum_operation(self, value):
+ """DEPRECATED: Use 'minimum_temporal' property instead."""
+ warnings.warn(
+ "Property 'minimum_operation' is deprecated. Use 'minimum_temporal' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ self.minimum_temporal = value
+
+ @property
+ def maximum_operation(self):
+ """DEPRECATED: Use 'maximum_temporal' property instead."""
+ warnings.warn(
+ "Property 'maximum_operation' is deprecated. Use 'maximum_temporal' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.maximum_temporal
+
+ @maximum_operation.setter
+ def maximum_operation(self, value):
+ """DEPRECATED: Use 'maximum_temporal' property instead."""
+ warnings.warn(
+ "Property 'maximum_operation' is deprecated. Use 'maximum_temporal' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ self.maximum_temporal = value
+
+ @property
+ def minimum_invest(self):
+ """DEPRECATED: Use 'minimum_periodic' property instead."""
+ warnings.warn(
+ "Property 'minimum_invest' is deprecated. Use 'minimum_periodic' instead.",
+ DeprecationWarning,
+ stacklevel=2,
)
- self.maximum_operation_per_hour = flow_system.create_time_series(
- f'{self.label_full}|maximum_operation_per_hour',
- self.maximum_operation_per_hour,
+ return self.minimum_periodic
+
+ @minimum_invest.setter
+ def minimum_invest(self, value):
+ """DEPRECATED: Use 'minimum_periodic' property instead."""
+ warnings.warn(
+ "Property 'minimum_invest' is deprecated. Use 'minimum_periodic' instead.",
+ DeprecationWarning,
+ stacklevel=2,
)
+ self.minimum_periodic = value
- self.specific_share_to_other_effects_operation = flow_system.create_effect_time_series(
- f'{self.label_full}|operation->', self.specific_share_to_other_effects_operation, 'operation'
+ @property
+ def maximum_invest(self):
+ """DEPRECATED: Use 'maximum_periodic' property instead."""
+ warnings.warn(
+ "Property 'maximum_invest' is deprecated. Use 'maximum_periodic' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.maximum_periodic
+
+ @maximum_invest.setter
+ def maximum_invest(self, value):
+ """DEPRECATED: Use 'maximum_periodic' property instead."""
+ warnings.warn(
+ "Property 'maximum_invest' is deprecated. Use 'maximum_periodic' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ self.maximum_periodic = value
+
+ @property
+ def minimum_operation_per_hour(self):
+ """DEPRECATED: Use 'minimum_per_hour' property instead."""
+ warnings.warn(
+ "Property 'minimum_operation_per_hour' is deprecated. Use 'minimum_per_hour' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.minimum_per_hour
+
+ @minimum_operation_per_hour.setter
+ def minimum_operation_per_hour(self, value):
+ """DEPRECATED: Use 'minimum_per_hour' property instead."""
+ warnings.warn(
+ "Property 'minimum_operation_per_hour' is deprecated. Use 'minimum_per_hour' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ self.minimum_per_hour = value
+
+ @property
+ def maximum_operation_per_hour(self):
+ """DEPRECATED: Use 'maximum_per_hour' property instead."""
+ warnings.warn(
+ "Property 'maximum_operation_per_hour' is deprecated. Use 'maximum_per_hour' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.maximum_per_hour
+
+ @maximum_operation_per_hour.setter
+ def maximum_operation_per_hour(self, value):
+ """DEPRECATED: Use 'maximum_per_hour' property instead."""
+ warnings.warn(
+ "Property 'maximum_operation_per_hour' is deprecated. Use 'maximum_per_hour' instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ self.maximum_per_hour = value
+
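+ # Illustrative: a deprecated keyword is mapped onto its replacement and
+ # emits a DeprecationWarning, e.g.
+ # Effect('costs', '€', 'Total costs', maximum_operation=100)
+ # behaves like
+ # Effect('costs', '€', 'Total costs', maximum_temporal=100)
+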
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
+ self.minimum_per_hour = flow_system.fit_to_model_coords(f'{prefix}|minimum_per_hour', self.minimum_per_hour)
+
+ self.maximum_per_hour = flow_system.fit_to_model_coords(f'{prefix}|maximum_per_hour', self.maximum_per_hour)
+
+ self.share_from_temporal = flow_system.fit_effects_to_model_coords(
+ label_prefix=None,
+ effect_values=self.share_from_temporal,
+ label_suffix=f'(temporal)->{prefix}(temporal)',
+ dims=['time', 'period', 'scenario'],
+ )
+ self.share_from_periodic = flow_system.fit_effects_to_model_coords(
+ label_prefix=None,
+ effect_values=self.share_from_periodic,
+ label_suffix=f'(periodic)->{prefix}(periodic)',
+ dims=['period', 'scenario'],
+ )
+
+ self.minimum_temporal = flow_system.fit_to_model_coords(
+ f'{prefix}|minimum_temporal', self.minimum_temporal, dims=['period', 'scenario']
+ )
+ self.maximum_temporal = flow_system.fit_to_model_coords(
+ f'{prefix}|maximum_temporal', self.maximum_temporal, dims=['period', 'scenario']
+ )
+ self.minimum_periodic = flow_system.fit_to_model_coords(
+ f'{prefix}|minimum_periodic', self.minimum_periodic, dims=['period', 'scenario']
+ )
+ self.maximum_periodic = flow_system.fit_to_model_coords(
+ f'{prefix}|maximum_periodic', self.maximum_periodic, dims=['period', 'scenario']
+ )
+ self.minimum_total = flow_system.fit_to_model_coords(
+ f'{prefix}|minimum_total',
+ self.minimum_total,
+ dims=['period', 'scenario'],
+ )
+ self.maximum_total = flow_system.fit_to_model_coords(
+ f'{prefix}|maximum_total', self.maximum_total, dims=['period', 'scenario']
)
- def create_model(self, model: SystemModel) -> EffectModel:
+ def create_model(self, model: FlowSystemModel) -> EffectModel:
self._plausibility_checks()
- self.model = EffectModel(model, self)
- return self.model
+ self.submodel = EffectModel(model, self)
+ return self.submodel
def _plausibility_checks(self) -> None:
# TODO: Check for plausibility
@@ -194,70 +388,64 @@ def _plausibility_checks(self) -> None:
class EffectModel(ElementModel):
- def __init__(self, model: SystemModel, element: Effect):
+ element: Effect # Type hint
+
+ def __init__(self, model: FlowSystemModel, element: Effect):
super().__init__(model, element)
- self.element: Effect = element
+
+ def _do_modeling(self):
self.total: linopy.Variable | None = None
- self.invest: ShareAllocationModel = self.add(
+ self.periodic: ShareAllocationModel = self.add_submodels(
ShareAllocationModel(
- self._model,
- False,
- self.label_of_element,
- 'invest',
- label_full=f'{self.label_full}(invest)',
- total_max=self.element.maximum_invest,
- total_min=self.element.minimum_invest,
- )
+ model=self._model,
+ dims=('period', 'scenario'),
+ label_of_element=self.label_of_element,
+ label_of_model=f'{self.label_of_model}(periodic)',
+ total_max=self.element.maximum_periodic,
+ total_min=self.element.minimum_periodic,
+ ),
+ short_name='periodic',
)
- self.operation: ShareAllocationModel = self.add(
+ self.temporal: ShareAllocationModel = self.add_submodels(
ShareAllocationModel(
- self._model,
- True,
- self.label_of_element,
- 'operation',
- label_full=f'{self.label_full}(operation)',
- total_max=self.element.maximum_operation,
- total_min=self.element.minimum_operation,
- min_per_hour=self.element.minimum_operation_per_hour.active_data
- if self.element.minimum_operation_per_hour is not None
- else None,
- max_per_hour=self.element.maximum_operation_per_hour.active_data
- if self.element.maximum_operation_per_hour is not None
- else None,
- )
+ model=self._model,
+ dims=('time', 'period', 'scenario'),
+ label_of_element=self.label_of_element,
+ label_of_model=f'{self.label_of_model}(temporal)',
+ total_max=self.element.maximum_temporal,
+ total_min=self.element.minimum_temporal,
+ min_per_hour=self.element.minimum_per_hour if self.element.minimum_per_hour is not None else None,
+ max_per_hour=self.element.maximum_per_hour if self.element.maximum_per_hour is not None else None,
+ ),
+ short_name='temporal',
)
- def do_modeling(self):
- for model in self.sub_models:
- model.do_modeling()
-
- self.total = self.add(
- self._model.add_variables(
- lower=self.element.minimum_total if self.element.minimum_total is not None else -np.inf,
- upper=self.element.maximum_total if self.element.maximum_total is not None else np.inf,
- coords=None,
- name=f'{self.label_full}|total',
- ),
- 'total',
+ self.total = self.add_variables(
+ lower=self.element.minimum_total if self.element.minimum_total is not None else -np.inf,
+ upper=self.element.maximum_total if self.element.maximum_total is not None else np.inf,
+ coords=self._model.get_coords(['period', 'scenario']),
+ name=self.label_full,
)
- self.add(
- self._model.add_constraints(
- self.total == self.operation.total.sum() + self.invest.total.sum(), name=f'{self.label_full}|total'
- ),
- 'total',
+ self.add_constraints(
+ self.total == self.temporal.total + self.periodic.total, name=self.label_full, short_name='total'
)
-EffectValuesExpr = dict[str, linopy.LinearExpression] # Used to create Shares
-EffectTimeSeries = dict[str, TimeSeries] # Used internally to index values
-EffectValuesDict = dict[str, NumericDataTS] # How effect values are stored
-EffectValuesUser = NumericDataTS | dict[str, NumericDataTS] # User-specified Shares to Effects
-""" This datatype is used to define the share to an effect by a certain attribute. """
+TemporalEffectsUser = TemporalDataUser | dict[str, TemporalDataUser] # User-specified Shares to Effects
+""" This datatype is used to define a temporal share to an effect by a certain attribute. """
+
+PeriodicEffectsUser = PeriodicDataUser | dict[str, PeriodicDataUser] # User-specified Shares to Effects
+""" This datatype is used to define a scalar share to an effect by a certain attribute. """
+
+TemporalEffects = dict[str, TemporalData] # Internal representation of temporal shares to Effects
+""" This datatype is used internally to handle temporal shares to an effect. """
+
+PeriodicEffects = dict[str, Scalar]
+""" This datatype is used internally to handle scalar shares to an effect. """
-EffectValuesUserScalar = Scalar | dict[str, Scalar] # User-specified Shares to Effects
-""" This datatype is used to define the share to an effect by a certain attribute. Only scalars are allowed. """
+EffectExpr = dict[str, linopy.LinearExpression] # Used to create Shares
class EffectCollection:
@@ -270,13 +458,13 @@ def __init__(self, *effects: Effect):
self._standard_effect: Effect | None = None
self._objective_effect: Effect | None = None
- self.model: EffectCollectionModel | None = None
+ self.submodel: EffectCollectionModel | None = None
self.add_effects(*effects)
- def create_model(self, model: SystemModel) -> EffectCollectionModel:
+ def create_model(self, model: FlowSystemModel) -> EffectCollectionModel:
self._plausibility_checks()
- self.model = EffectCollectionModel(model, self)
- return self.model
+ self.submodel = EffectCollectionModel(model, self)
+ return self.submodel
def add_effects(self, *effects: Effect) -> None:
for effect in list(effects):
@@ -289,20 +477,24 @@ def add_effects(self, *effects: Effect) -> None:
self._effects[effect.label] = effect
logger.info(f'Registered new Effect: {effect.label}')
- def create_effect_values_dict(self, effect_values_user: EffectValuesUser) -> EffectValuesDict | None:
+ def create_effect_values_dict(
+ self, effect_values_user: PeriodicEffectsUser | TemporalEffectsUser
+ ) -> dict[str, Scalar | TemporalDataUser] | None:
"""
Converts effect values into a dictionary. If a scalar is provided, it is associated with a default effect type.
Examples
--------
- effect_values_user = 20 -> {None: 20}
- effect_values_user = None -> None
- effect_values_user = {effect1: 20, effect2: 0.3} -> {effect1: 20, effect2: 0.3}
+ effect_values_user = 20 -> {'': 20}
+ effect_values_user = {None: 20} -> {'': 20}
+ effect_values_user = None -> None
+ effect_values_user = {'effect1': 20, 'effect2': 0.3} -> {'effect1': 20, 'effect2': 0.3}
Returns
-------
dict or None
- A dictionary with None or Effect as the key, or None if input is None.
+ A dictionary keyed by effect label, or None if input is None.
+ Note: a standard effect must be defined when passing scalars or None labels.
"""
def get_effect_label(eff: Effect | str) -> str:
@@ -315,6 +507,8 @@ def get_effect_label(eff: Effect | str) -> str:
stacklevel=2,
)
return eff.label
+ elif eff is None:
+ return self.standard_effect.label
else:
return eff
@@ -326,26 +520,23 @@ def get_effect_label(eff: Effect | str) -> str:
def _plausibility_checks(self) -> None:
# Check circular loops in effects:
- # TODO: Improve checks!! Only most basic case covered...
+ temporal, periodic = self.calculate_effect_share_factors()
- def error_str(effect_label: str, share_ffect_label: str):
- return (
- f' {effect_label} -> has share in: {share_ffect_label}\n'
- f' {share_ffect_label} -> has share in: {effect_label}'
- )
+ # Validate all referenced sources exist
+ unknown = {src for src, _ in list(temporal.keys()) + list(periodic.keys()) if src not in self.effects}
+ if unknown:
+ raise KeyError(f'Unknown effects used in effect share mappings: {sorted(unknown)}')
- for effect in self.effects.values():
- # Effekt darf nicht selber als Share in seinen ShareEffekten auftauchen:
- # operation:
- for target_effect in effect.specific_share_to_other_effects_operation.keys():
- assert effect not in self[target_effect].specific_share_to_other_effects_operation.keys(), (
- f'Error: circular operation-shares \n{error_str(effect.label, self[target_effect].label)}'
- )
- # invest:
- for target_effect in effect.specific_share_to_other_effects_invest.keys():
- assert effect not in self[target_effect].specific_share_to_other_effects_invest.keys(), (
- f'Error: circular invest-shares \n{error_str(effect.label, self[target_effect].label)}'
- )
+ temporal_cycles = detect_cycles(tuples_to_adjacency_list(list(temporal)))
+ periodic_cycles = detect_cycles(tuples_to_adjacency_list(list(periodic)))
+
+ if temporal_cycles:
+ cycle_str = '\n'.join([' -> '.join(cycle) for cycle in temporal_cycles])
+ raise ValueError(f'Error: circular temporal-shares detected:\n{cycle_str}')
+
+ if periodic_cycles:
+ cycle_str = '\n'.join([' -> '.join(cycle) for cycle in periodic_cycles])
+ raise ValueError(f'Error: circular periodic-shares detected:\n{cycle_str}')
def __getitem__(self, effect: str | Effect | None) -> Effect:
"""
@@ -378,7 +569,10 @@ def __contains__(self, item: str | Effect) -> bool:
if isinstance(item, str):
return item in self.effects # Check if the label exists
elif isinstance(item, Effect):
- return item in self.effects.values() # Check if the object exists
+ if item.label_full in self.effects:
+ return True
+ if item in self.effects.values(): # Check if the object exists
+ return True
return False
@property
@@ -388,7 +582,10 @@ def effects(self) -> dict[str, Effect]:
@property
def standard_effect(self) -> Effect:
if self._standard_effect is None:
- raise KeyError('No standard-effect specified!')
+ raise KeyError(
+ 'No standard-effect specified! Either set an effect through is_standard=True '
+ 'or pass a mapping when specifying effect values: {effect_label: value}.'
+ )
return self._standard_effect
@standard_effect.setter
@@ -409,60 +606,248 @@ def objective_effect(self, value: Effect) -> None:
raise ValueError(f'An objective-effect already exists! ({self._objective_effect.label=})')
self._objective_effect = value
-
-class EffectCollectionModel(Model):
+ def calculate_effect_share_factors(
+ self,
+ ) -> tuple[
+ dict[tuple[str, str], xr.DataArray],
+ dict[tuple[str, str], xr.DataArray],
+ ]:
+ shares_periodic = {}
+ for name, effect in self.effects.items():
+ if effect.share_from_periodic:
+ for source, data in effect.share_from_periodic.items():
+ if source not in shares_periodic:
+ shares_periodic[source] = {}
+ shares_periodic[source][name] = data
+ shares_periodic = calculate_all_conversion_paths(shares_periodic)
+
+ shares_temporal = {}
+ for name, effect in self.effects.items():
+ if effect.share_from_temporal:
+ for source, data in effect.share_from_temporal.items():
+ if source not in shares_temporal:
+ shares_temporal[source] = {}
+ shares_temporal[source][name] = data
+ shares_temporal = calculate_all_conversion_paths(shares_temporal)
+
+ return shares_temporal, shares_periodic
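+
+ # Sketch of the returned mapping: if the 'costs' effect declares
+ # share_from_temporal={'CO2': 0.2}, the temporal result contains
+ # {('CO2', 'costs'): xr.DataArray(0.2)}
+ # plus entries for any indirect paths through intermediate effects.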
+
+
+class EffectCollectionModel(Submodel):
"""
Handling all Effects
"""
- def __init__(self, model: SystemModel, effects: EffectCollection):
- super().__init__(model, label_of_element='Effects')
+ def __init__(self, model: FlowSystemModel, effects: EffectCollection):
self.effects = effects
self.penalty: ShareAllocationModel | None = None
+ super().__init__(model, label_of_element='Effects')
def add_share_to_effects(
self,
name: str,
- expressions: EffectValuesExpr,
- target: Literal['operation', 'invest'],
+ expressions: EffectExpr,
+ target: Literal['temporal', 'periodic'],
) -> None:
for effect, expression in expressions.items():
- if target == 'operation':
- self.effects[effect].model.operation.add_share(name, expression)
- elif target == 'invest':
- self.effects[effect].model.invest.add_share(name, expression)
+ if target == 'temporal':
+ self.effects[effect].submodel.temporal.add_share(
+ name,
+ expression,
+ dims=('time', 'period', 'scenario'),
+ )
+ elif target == 'periodic':
+ self.effects[effect].submodel.periodic.add_share(
+ name,
+ expression,
+ dims=('period', 'scenario'),
+ )
else:
raise ValueError(f'Target {target} not supported!')
def add_share_to_penalty(self, name: str, expression: linopy.LinearExpression) -> None:
if expression.ndim != 0:
raise TypeError(f'Penalty shares must be scalar expressions! ({expression.ndim=})')
- self.penalty.add_share(name, expression)
+ self.penalty.add_share(name, expression, dims=())
- def do_modeling(self):
+ def _do_modeling(self):
+ super()._do_modeling()
for effect in self.effects:
effect.create_model(self._model)
- self.penalty = self.add(
- ShareAllocationModel(self._model, shares_are_time_series=False, label_of_element='Penalty')
+ self.penalty = self.add_submodels(
+ ShareAllocationModel(self._model, dims=(), label_of_element='Penalty'),
+ short_name='penalty',
)
- for model in [effect.model for effect in self.effects] + [self.penalty]:
- model.do_modeling()
self._add_share_between_effects()
- self._model.add_objective(self.effects.objective_effect.model.total + self.penalty.total)
+ self._model.add_objective(
+ (self.effects.objective_effect.submodel.total * self._model.weights).sum() + self.penalty.total.sum()
+ )
def _add_share_between_effects(self):
- for origin_effect in self.effects:
- # 1. operation: -> hier sind es Zeitreihen (share_TS)
- for target_effect, time_series in origin_effect.specific_share_to_other_effects_operation.items():
- self.effects[target_effect].model.operation.add_share(
- origin_effect.model.operation.label_full,
- origin_effect.model.operation.total_per_timestep * time_series.active_data,
+ for target_effect in self.effects:
+ # 1. temporal: <- receiving temporal shares from other effects
+ for source_effect, time_series in target_effect.share_from_temporal.items():
+ target_effect.submodel.temporal.add_share(
+ self.effects[source_effect].submodel.temporal.label_full,
+ self.effects[source_effect].submodel.temporal.total_per_timestep * time_series,
+ dims=('time', 'period', 'scenario'),
)
- # 2. invest: -> hier ist es Scalar (share)
- for target_effect, factor in origin_effect.specific_share_to_other_effects_invest.items():
- self.effects[target_effect].model.invest.add_share(
- origin_effect.model.invest.label_full,
- origin_effect.model.invest.total * factor,
+ # 2. periodic: <- receiving periodic shares from other effects
+ for source_effect, factor in target_effect.share_from_periodic.items():
+ target_effect.submodel.periodic.add_share(
+ self.effects[source_effect].submodel.periodic.label_full,
+ self.effects[source_effect].submodel.periodic.total * factor,
+ dims=('period', 'scenario'),
)
+
+
+def calculate_all_conversion_paths(
+ conversion_dict: dict[str, dict[str, Scalar | xr.DataArray]],
+) -> dict[tuple[str, str], xr.DataArray]:
+ """
+ Calculates all possible direct and indirect conversion factors between units/domains.
+ This function uses Breadth-First Search (BFS) to find all possible conversion paths
+ between different units or domains in a conversion graph. It computes both direct
+ conversions (explicitly provided in the input) and indirect conversions (derived
+ through intermediate units).
+ Args:
+ conversion_dict: A nested dictionary where:
+ - Outer keys represent origin units/domains
+ - Inner dictionaries map target units/domains to their conversion factors
+ - Conversion factors can be scalars or xarray DataArrays
+ Returns:
+ A dictionary mapping (origin, target) tuples to their respective conversion factors.
+ Each key is a tuple of strings representing the origin and target units/domains.
+ Each value is the conversion factor from origin to target, returned as an xr.DataArray.
+ Factors from multiple distinct paths between the same origin and target are summed.
+ """
+ # Initialize the result dictionary to accumulate all paths
+ result = {}
+
+ # Add direct connections to the result first
+ for origin, targets in conversion_dict.items():
+ for target, factor in targets.items():
+ result[(origin, target)] = factor
+
+ # Track all paths by keeping path history to avoid cycles
+ # Iterate over each domain in the dictionary
+ for origin in conversion_dict:
+ # Keep track of visited paths to avoid repeating calculations
+ processed_paths = set()
+ # Use a queue with (current_domain, factor, path_history)
+ queue = deque([(origin, 1, [origin])])
+
+ while queue:
+ current_domain, factor, path = queue.popleft()
+
+ # Skip if we've processed this exact path before
+ path_key = tuple(path)
+ if path_key in processed_paths:
+ continue
+ processed_paths.add(path_key)
+
+ # Iterate over the neighbors of the current domain
+ for target, conversion_factor in conversion_dict.get(current_domain, {}).items():
+ # Skip if target would create a cycle
+ if target in path:
+ continue
+
+ # Calculate the indirect conversion factor
+ indirect_factor = factor * conversion_factor
+ new_path = path + [target]
+
+ # Record only indirect paths (two or more edges); direct factors were added above
+ if len(new_path) > 2 and new_path[0] == origin:
+ # Update the result dictionary - accumulate factors from different paths
+ if (origin, target) in result:
+ result[(origin, target)] = result[(origin, target)] + indirect_factor
+ else:
+ result[(origin, target)] = indirect_factor
+
+ # Add new path to queue for further exploration
+ queue.append((target, indirect_factor, new_path))
+
+ # Convert all values to DataArrays
+ result = {key: value if isinstance(value, xr.DataArray) else xr.DataArray(value) for key, value in result.items()}
+
+ return result
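+
+# Worked example (sketch): with direct factors PE -> CO2 = 0.5 and CO2 -> costs = 0.2,
+# the indirect factor PE -> costs follows as 0.5 * 0.2 = 0.1:
+#
+# >>> factors = calculate_all_conversion_paths({'PE': {'CO2': 0.5}, 'CO2': {'costs': 0.2}})
+# >>> {pair: float(v) for pair, v in factors.items()}
+# {('PE', 'CO2'): 0.5, ('CO2', 'costs'): 0.2, ('PE', 'costs'): 0.1}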
+
+
+def detect_cycles(graph: dict[str, list[str]]) -> list[list[str]]:
+ """
+ Detects cycles in a directed graph using DFS.
+
+ Args:
+ graph: Adjacency list representation of the graph
+
+ Returns:
+ List of cycles found, where each cycle is a list of nodes
+ """
+ # Track nodes in current recursion stack
+ visiting = set()
+ # Track nodes that have been fully explored
+ visited = set()
+ # Store all found cycles
+ cycles = []
+
+ def dfs_find_cycles(node, path=None):
+ if path is None:
+ path = []
+
+ # Current path to this node
+ current_path = path + [node]
+ # Add node to current recursion stack
+ visiting.add(node)
+
+ # Check all neighbors
+ for neighbor in graph.get(node, []):
+ # If neighbor is in current path, we found a cycle
+ if neighbor in visiting:
+ # Get the cycle by extracting the relevant portion of the path
+ cycle_start = current_path.index(neighbor)
+ cycle = current_path[cycle_start:] + [neighbor]
+ cycles.append(cycle)
+ # If neighbor hasn't been fully explored, check it
+ elif neighbor not in visited:
+ dfs_find_cycles(neighbor, current_path)
+
+ # Remove node from current path and mark as fully explored
+ visiting.remove(node)
+ visited.add(node)
+
+ # Check each unvisited node
+ for node in graph:
+ if node not in visited:
+ dfs_find_cycles(node)
+
+ return cycles
+
+
+def tuples_to_adjacency_list(edges: list[tuple[str, str]]) -> dict[str, list[str]]:
+ """
+ Converts a list of edge tuples (source, target) to an adjacency list representation.
+
+ Args:
+ edges: List of (source, target) tuples representing directed edges
+
+ Returns:
+ Dictionary mapping each source node to a list of its target nodes
+ """
+ graph = {}
+
+ for source, target in edges:
+ if source not in graph:
+ graph[source] = []
+ graph[source].append(target)
+
+ # Ensure target nodes with no outgoing edges are in the graph
+ if target not in graph:
+ graph[target] = []
+
+ return graph
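+
+# Sketch: the two helpers compose for the circular-share check in
+# EffectCollection._plausibility_checks:
+#
+# >>> graph = tuples_to_adjacency_list([('costs', 'CO2'), ('CO2', 'costs')])
+# >>> detect_cycles(graph)
+# [['costs', 'CO2', 'costs']]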
+
+
+# Backward compatibility aliases
+NonTemporalEffectsUser = PeriodicEffectsUser
+NonTemporalEffects = PeriodicEffects
diff --git a/flixopt/elements.py b/flixopt/elements.py
index 21783808c..25e399811 100644
--- a/flixopt/elements.py
+++ b/flixopt/elements.py
@@ -9,17 +9,19 @@
from typing import TYPE_CHECKING
import numpy as np
+import xarray as xr
from .config import CONFIG
-from .core import NumericData, NumericDataTS, PlausibilityError, Scalar
-from .features import InvestmentModel, OnOffModel, PreventSimultaneousUsageModel
+from .core import PlausibilityError, Scalar, TemporalData, TemporalDataUser
+from .features import InvestmentModel, OnOffModel
from .interface import InvestParameters, OnOffParameters
-from .structure import Element, ElementModel, SystemModel, register_class_for_io
+from .modeling import BoundingPatterns, ModelingPrimitives, ModelingUtilitiesAbstract
+from .structure import Element, ElementModel, FlowSystemModel, register_class_for_io
if TYPE_CHECKING:
import linopy
- from .effects import EffectValuesUser
+ from .effects import TemporalEffectsUser
from .flow_system import FlowSystem
logger = logging.getLogger('flixopt')
@@ -90,20 +92,18 @@ def __init__(
self.flows: dict[str, Flow] = {flow.label: flow for flow in self.inputs + self.outputs}
- def create_model(self, model: SystemModel) -> ComponentModel:
+ def create_model(self, model: FlowSystemModel) -> ComponentModel:
self._plausibility_checks()
- self.model = ComponentModel(model, self)
- return self.model
+ self.submodel = ComponentModel(model, self)
+ return self.submodel
- def transform_data(self, flow_system: FlowSystem) -> None:
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
if self.on_off_parameters is not None:
- self.on_off_parameters.transform_data(flow_system, self.label_full)
+ self.on_off_parameters.transform_data(flow_system, prefix)
- def infos(self, use_numpy=True, use_element_label: bool = False) -> dict:
- infos = super().infos(use_numpy, use_element_label)
- infos['inputs'] = [flow.infos(use_numpy, use_element_label) for flow in self.inputs]
- infos['outputs'] = [flow.infos(use_numpy, use_element_label) for flow in self.outputs]
- return infos
+ for flow in self.inputs + self.outputs:
+ flow.transform_data(flow_system) # Flow doesn't need the name_prefix
def _check_unique_flow_labels(self):
all_flow_labels = [flow.label for flow in self.inputs + self.outputs]
@@ -126,6 +126,10 @@ class Bus(Element):
physical or logical connection points for energy carriers (electricity, heat, gas)
or material flows between different Components.
+ Mathematical Formulation:
+ See the complete mathematical model in the documentation:
+ [Bus](../user-guide/mathematical-notation/elements/Bus.md)
+
Args:
label: The label of the Element. Used to identify it in the FlowSystem.
excess_penalty_per_flow_hour: Penalty costs for bus balance violations.
@@ -177,7 +181,7 @@ class Bus(Element):
def __init__(
self,
label: str,
- excess_penalty_per_flow_hour: NumericData | NumericDataTS | None = 1e5,
+ excess_penalty_per_flow_hour: TemporalDataUser | None = 1e5,
meta_data: dict | None = None,
):
super().__init__(label, meta_data=meta_data)
@@ -185,14 +189,15 @@ def __init__(
self.inputs: list[Flow] = []
self.outputs: list[Flow] = []
- def create_model(self, model: SystemModel) -> BusModel:
+ def create_model(self, model: FlowSystemModel) -> BusModel:
self._plausibility_checks()
- self.model = BusModel(model, self)
- return self.model
+ self.submodel = BusModel(model, self)
+ return self.submodel
- def transform_data(self, flow_system: FlowSystem):
- self.excess_penalty_per_flow_hour = flow_system.create_time_series(
- f'{self.label_full}|excess_penalty_per_flow_hour', self.excess_penalty_per_flow_hour
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
+ self.excess_penalty_per_flow_hour = flow_system.fit_to_model_coords(
+ f'{prefix}|excess_penalty_per_flow_hour', self.excess_penalty_per_flow_hour
)
def _plausibility_checks(self) -> None:
@@ -200,7 +205,7 @@ def _plausibility_checks(self) -> None:
zero_penalty = np.all(np.equal(self.excess_penalty_per_flow_hour, 0))
if zero_penalty:
logger.warning(
- f'In Bus {self.label}, the excess_penalty_per_flow_hour is 0. Use "None" or a value > 0.'
+ f'In Bus {self.label_full}, the excess_penalty_per_flow_hour is 0. Use "None" or a value > 0.'
)
@property
@@ -241,39 +246,27 @@ class Flow(Element):
- **InvestParameters**: Used for `size` when flow Size is an investment decision
- **OnOffParameters**: Used for `on_off_parameters` when flow has discrete states
+ Mathematical Formulation:
+ See the complete mathematical model in the documentation:
+ [Flow](../user-guide/mathematical-notation/elements/Flow.md)
+
Args:
- label: Unique identifier for the flow within its component.
- The full label combines component and flow labels.
- bus: Label of the bus this flow connects to. Must match a bus in the FlowSystem.
- size: Flow capacity or nominal rating. Can be:
- - Scalar value for fixed capacity
- - InvestParameters for investment-based sizing decisions
- - None to use large default value (CONFIG.Modeling.big)
- relative_minimum: Minimum flow rate as fraction of size.
- Example: 0.2 means flow cannot go below 20% of rated capacity.
- relative_maximum: Maximum flow rate as fraction of size (typically 1.0).
- Values >1.0 allow temporary overload operation.
- load_factor_min: Minimum average utilization over the time horizon (0-1).
- Calculated as total flow hours divided by (size × total time).
- load_factor_max: Maximum average utilization over the time horizon (0-1).
- Useful for equipment duty cycle limits or maintenance scheduling.
- effects_per_flow_hour: Operational costs and impacts per unit of flow-time.
- Dictionary mapping effect names to unit costs (e.g., fuel costs, emissions).
- on_off_parameters: Binary operation constraints using OnOffParameters.
- Enables modeling of startup costs, minimum run times, cycling limits.
- Only relevant when relative_minimum > 0 or discrete operation is required.
- flow_hours_total_max: Maximum cumulative flow-hours over time horizon.
- Alternative to load_factor_max for absolute energy/material limits.
- flow_hours_total_min: Minimum cumulative flow-hours over time horizon.
- Alternative to load_factor_min for contractual or operational requirements.
- fixed_relative_profile: Predetermined flow pattern as fraction of size.
- When specified, flow rate becomes: size × fixed_relative_profile(t).
- Used for: demand profiles, renewable generation, fixed schedules.
- previous_flow_rate: Initial flow state for startup/shutdown dynamics.
- Used with on_off_parameters to determine initial on/off status.
- If None, assumes flow was off in previous time period.
- meta_data: Additional information stored with results but not used in optimization.
- Must contain only Python native types (dict, list, str, int, float, bool).
+ label: Unique flow identifier within its component.
+ bus: Bus label this flow connects to.
+ size: Flow capacity. Scalar, InvestParameters, or None (uses CONFIG.Modeling.big).
+ relative_minimum: Minimum flow rate as fraction of size (0-1). Default: 0.
+ relative_maximum: Maximum flow rate as fraction of size. Default: 1.
+ load_factor_min: Minimum average utilization (0-1). Default: 0.
+ load_factor_max: Maximum average utilization (0-1). Default: 1.
+ effects_per_flow_hour: Operational costs/impacts per flow-hour.
+ Dict mapping effect names to values (e.g., {'cost': 45, 'CO2': 0.8}).
+ on_off_parameters: Binary operation constraints (OnOffParameters). Default: None.
+ flow_hours_total_max: Maximum cumulative flow-hours. Alternative to load_factor_max.
+ flow_hours_total_min: Minimum cumulative flow-hours. Alternative to load_factor_min.
+ fixed_relative_profile: Predetermined pattern as fraction of size.
+ Flow rate = size × fixed_relative_profile(t).
+ previous_flow_rate: Initial flow state for on/off dynamics. Default: None (off).
+ meta_data: Additional info stored in results. Python native types only.
Examples:
Basic power flow with fixed capacity:
@@ -315,7 +308,7 @@ class Flow(Element):
effects_per_switch_on={'startup_cost': 100, 'wear': 0.1},
consecutive_on_hours_min=2, # Must run at least 2 hours
consecutive_off_hours_min=1, # Must stay off at least 1 hour
- switch_on_total_max=200, # Maximum 200 starts per year
+ switch_on_total_max=200, # Maximum 200 starts per period
),
)
```
@@ -369,17 +362,17 @@ def __init__(
self,
label: str,
bus: str,
- size: Scalar | InvestParameters | None = None,
- fixed_relative_profile: NumericDataTS | None = None,
- relative_minimum: NumericDataTS = 0,
- relative_maximum: NumericDataTS = 1,
- effects_per_flow_hour: EffectValuesUser | None = None,
+ size: Scalar | InvestParameters | None = None,
+ fixed_relative_profile: TemporalDataUser | None = None,
+ relative_minimum: TemporalDataUser = 0,
+ relative_maximum: TemporalDataUser = 1,
+ effects_per_flow_hour: TemporalEffectsUser | None = None,
on_off_parameters: OnOffParameters | None = None,
flow_hours_total_max: Scalar | None = None,
flow_hours_total_min: Scalar | None = None,
load_factor_min: Scalar | None = None,
load_factor_max: Scalar | None = None,
- previous_flow_rate: NumericData | None = None,
+ previous_flow_rate: Scalar | list[Scalar] | None = None,
meta_data: dict | None = None,
):
super().__init__(label, meta_data=meta_data)
@@ -396,9 +389,7 @@ def __init__(
self.flow_hours_total_min = flow_hours_total_min
self.on_off_parameters = on_off_parameters
- self.previous_flow_rate = (
- previous_flow_rate if not isinstance(previous_flow_rate, list) else np.array(previous_flow_rate)
- )
+ self.previous_flow_rate = previous_flow_rate
self.component: str = 'UnknownComponent'
self.is_input_in_component: bool | None = None
@@ -415,68 +406,80 @@ def __init__(
self.bus = bus
self._bus_object = None
- def create_model(self, model: SystemModel) -> FlowModel:
+ def create_model(self, model: FlowSystemModel) -> FlowModel:
self._plausibility_checks()
- self.model = FlowModel(model, self)
- return self.model
-
- def transform_data(self, flow_system: FlowSystem):
- self.relative_minimum = flow_system.create_time_series(
- f'{self.label_full}|relative_minimum', self.relative_minimum
+ self.submodel = FlowModel(model, self)
+ return self.submodel
+
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ prefix = '|'.join(filter(None, [name_prefix, self.label_full]))
+ self.relative_minimum = flow_system.fit_to_model_coords(f'{prefix}|relative_minimum', self.relative_minimum)
+ self.relative_maximum = flow_system.fit_to_model_coords(f'{prefix}|relative_maximum', self.relative_maximum)
+ self.fixed_relative_profile = flow_system.fit_to_model_coords(
+ f'{prefix}|fixed_relative_profile', self.fixed_relative_profile
+ )
+ self.effects_per_flow_hour = flow_system.fit_effects_to_model_coords(
+ prefix, self.effects_per_flow_hour, 'per_flow_hour'
+ )
+ self.flow_hours_total_max = flow_system.fit_to_model_coords(
+ f'{prefix}|flow_hours_total_max', self.flow_hours_total_max, dims=['period', 'scenario']
)
- self.relative_maximum = flow_system.create_time_series(
- f'{self.label_full}|relative_maximum', self.relative_maximum
+ self.flow_hours_total_min = flow_system.fit_to_model_coords(
+ f'{prefix}|flow_hours_total_min', self.flow_hours_total_min, dims=['period', 'scenario']
)
- self.fixed_relative_profile = flow_system.create_time_series(
- f'{self.label_full}|fixed_relative_profile', self.fixed_relative_profile
+ self.load_factor_max = flow_system.fit_to_model_coords(
+ f'{prefix}|load_factor_max', self.load_factor_max, dims=['period', 'scenario']
)
- self.effects_per_flow_hour = flow_system.create_effect_time_series(
- self.label_full, self.effects_per_flow_hour, 'per_flow_hour'
+ self.load_factor_min = flow_system.fit_to_model_coords(
+ f'{prefix}|load_factor_min', self.load_factor_min, dims=['period', 'scenario']
)
+
if self.on_off_parameters is not None:
- self.on_off_parameters.transform_data(flow_system, self.label_full)
+ self.on_off_parameters.transform_data(flow_system, prefix)
if isinstance(self.size, InvestParameters):
- self.size.transform_data(flow_system)
-
- def infos(self, use_numpy: bool = True, use_element_label: bool = False) -> dict:
- infos = super().infos(use_numpy, use_element_label)
- infos['is_input_in_component'] = self.is_input_in_component
- return infos
-
- def to_dict(self) -> dict:
- data = super().to_dict()
- if isinstance(data.get('previous_flow_rate'), np.ndarray):
- data['previous_flow_rate'] = data['previous_flow_rate'].tolist()
- return data
+ self.size.transform_data(flow_system, prefix)
+ else:
+ self.size = flow_system.fit_to_model_coords(f'{prefix}|size', self.size, dims=['period', 'scenario'])
def _plausibility_checks(self) -> None:
# TODO: Incorporate into Variable? (Lower_bound can not be greater than upper bound
- if np.any(self.relative_minimum > self.relative_maximum):
+ if (self.relative_minimum > self.relative_maximum).any():
raise PlausibilityError(self.label_full + ': Ensure that relative_minimum <= relative_maximum!')
- if (
- self.size == CONFIG.Modeling.big and self.fixed_relative_profile is not None
+ if not isinstance(self.size, InvestParameters) and (
+ np.any(self.size == CONFIG.Modeling.big) and self.fixed_relative_profile is not None
): # Default Size --> Most likely by accident
logger.warning(
- f'Flow "{self.label}" has no size assigned, but a "fixed_relative_profile". '
+ f'Flow "{self.label_full}" has no size assigned, but a "fixed_relative_profile". '
f'The default size is {CONFIG.Modeling.big}. As "flow_rate = size * fixed_relative_profile", '
f'the resulting flow_rate will be very high. To fix this, assign a size to the Flow {self}.'
)
if self.fixed_relative_profile is not None and self.on_off_parameters is not None:
- raise ValueError(
- f'Flow {self.label} has both a fixed_relative_profile and an on_off_parameters. This is not supported. '
- f'Use relative_minimum and relative_maximum instead, '
- f'if you want to allow flows to be switched on and off.'
+ logger.warning(
+ f'Flow {self.label_full} has both a fixed_relative_profile and on_off_parameters. '
+ f'This allows the flow to be switched on and off, deviating from the fixed profile.'
)
- if (self.relative_minimum > 0).any() and self.on_off_parameters is None:
+ if np.any(self.relative_minimum > 0) and self.on_off_parameters is None:
logger.warning(
- f'Flow {self.label} has a relative_minimum of {self.relative_minimum.active_data} and no on_off_parameters. '
+ f'Flow {self.label_full} has a relative_minimum of {self.relative_minimum} and no on_off_parameters. '
f'This prevents the flow_rate from switching off (flow_rate = 0). '
f'Consider using on_off_parameters to allow the flow to be switched on and off.'
)
+ if self.previous_flow_rate is not None:
+ if not any(
+ [
+ isinstance(self.previous_flow_rate, np.ndarray) and self.previous_flow_rate.ndim == 1,
+ isinstance(self.previous_flow_rate, (int, float, list)),
+ ]
+ ):
+ raise TypeError(
+ f'previous_flow_rate must be None, a scalar, a list of scalars, or a 1D numpy array. Got {type(self.previous_flow_rate)}. '
+ f'Different values in different periods or scenarios are not yet supported.'
+ )
+
@property
def label_full(self) -> str:
return f'{self.component}({self.label})'
@@ -486,237 +489,294 @@ def size_is_fixed(self) -> bool:
# If no InvestParameters exists --> True; with InvestParameters, take its value
return False if (isinstance(self.size, InvestParameters) and self.size.fixed_size is None) else True
- @property
- def invest_is_optional(self) -> bool:
- # Wenn kein InvestParameters existiert: # Investment ist nicht optional -> Keine Variable --> False
- return False if (isinstance(self.size, InvestParameters) and not self.size.optional) else True
-
class FlowModel(ElementModel):
- def __init__(self, model: SystemModel, element: Flow):
+ element: Flow # Type hint
+
+ def __init__(self, model: FlowSystemModel, element: Flow):
super().__init__(model, element)
- self.element: Flow = element
- self.flow_rate: linopy.Variable | None = None
- self.total_flow_hours: linopy.Variable | None = None
- self.on_off: OnOffModel | None = None
- self._investment: InvestmentModel | None = None
-
- def do_modeling(self):
- # eq relative_minimum(t) * size <= flow_rate(t) <= relative_maximum(t) * size
- self.flow_rate: linopy.Variable = self.add(
- self._model.add_variables(
- lower=self.flow_rate_lower_bound,
- upper=self.flow_rate_upper_bound,
- coords=self._model.coords,
- name=f'{self.label_full}|flow_rate',
+ def _do_modeling(self):
+ super()._do_modeling()
+ # Main flow rate variable
+ self.add_variables(
+ lower=self.absolute_flow_rate_bounds[0],
+ upper=self.absolute_flow_rate_bounds[1],
+ coords=self._model.get_coords(),
+ short_name='flow_rate',
+ )
+
+ self._constraint_flow_rate()
+
+ # Total flow hours tracking
+ ModelingPrimitives.expression_tracking_variable(
+ model=self,
+ name=f'{self.label_full}|total_flow_hours',
+ tracked_expression=(self.flow_rate * self._model.hours_per_step).sum('time'),
+ bounds=(
+ self.element.flow_hours_total_min if self.element.flow_hours_total_min is not None else 0,
+ self.element.flow_hours_total_max if self.element.flow_hours_total_max is not None else None,
),
- 'flow_rate',
+ coords=['period', 'scenario'],
+ short_name='total_flow_hours',
)
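+ # (tracking pattern: a total_flow_hours variable constrained to equal the
+ # tracked expression, bounded by flow_hours_total_min/max)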
- # OnOff
- if self.element.on_off_parameters is not None:
- self.on_off: OnOffModel = self.add(
- OnOffModel(
- model=self._model,
- label_of_element=self.label_of_element,
- on_off_parameters=self.element.on_off_parameters,
- defining_variables=[self.flow_rate],
- defining_bounds=[self.flow_rate_bounds_on],
- previous_values=[self.element.previous_flow_rate],
- ),
- 'on_off',
- )
- self.on_off.do_modeling()
+ # Load factor constraints
+ self._create_bounds_for_load_factor()
- # Investment
- if isinstance(self.element.size, InvestParameters):
- self._investment: InvestmentModel = self.add(
- InvestmentModel(
- model=self._model,
- label_of_element=self.label_of_element,
- parameters=self.element.size,
- defining_variable=self.flow_rate,
- relative_bounds_of_defining_variable=(
- self.flow_rate_lower_bound_relative,
- self.flow_rate_upper_bound_relative,
- ),
- on_variable=self.on_off.on if self.on_off is not None else None,
- ),
- 'investment',
- )
- self._investment.do_modeling()
-
- self.total_flow_hours = self.add(
- self._model.add_variables(
- lower=self.element.flow_hours_total_min if self.element.flow_hours_total_min is not None else 0,
- upper=self.element.flow_hours_total_max if self.element.flow_hours_total_max is not None else np.inf,
- coords=None,
- name=f'{self.label_full}|total_flow_hours',
+ # Effects
+ self._create_shares()
+
+ def _create_on_off_model(self):
+ on = self.add_variables(binary=True, short_name='on', coords=self._model.get_coords())
+ self.add_submodels(
+ OnOffModel(
+ model=self._model,
+ label_of_element=self.label_of_element,
+ parameters=self.element.on_off_parameters,
+ on_variable=on,
+ previous_states=self.previous_states,
+ label_of_model=self.label_of_element,
),
- 'total_flow_hours',
+ short_name='on_off',
)
- self.add(
- self._model.add_constraints(
- self.total_flow_hours == (self.flow_rate * self._model.hours_per_step).sum(),
- name=f'{self.label_full}|total_flow_hours',
+ def _create_investment_model(self):
+ self.add_submodels(
+ InvestmentModel(
+ model=self._model,
+ label_of_element=self.label_of_element,
+ parameters=self.element.size,
+ label_of_model=self.label_of_element,
),
- 'total_flow_hours',
+ 'investment',
)
- # Load factor
- self._create_bounds_for_load_factor()
+ def _constraint_flow_rate(self):
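+ # Four cases: plain bounds; on/off only (big-M bounds with state); investment
+ # only (bounds scaled by size); investment + on/off (scaled bounds with state)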
+ if not self.with_investment and not self.with_on_off:
+ # Most basic case. Already covered by direct variable bounds
+ pass
+
+ elif self.with_on_off and not self.with_investment:
+ # OnOff, but no Investment
+ self._create_on_off_model()
+ bounds = self.relative_flow_rate_bounds
+ BoundingPatterns.bounds_with_state(
+ self,
+ variable=self.flow_rate,
+ bounds=(bounds[0] * self.element.size, bounds[1] * self.element.size),
+ variable_state=self.on_off.on,
+ )
- # Shares
- self._create_shares()
+ elif self.with_investment and not self.with_on_off:
+ # Investment, but no OnOff
+ self._create_investment_model()
+ BoundingPatterns.scaled_bounds(
+ self,
+ variable=self.flow_rate,
+ scaling_variable=self.investment.size,
+ relative_bounds=self.relative_flow_rate_bounds,
+ )
+
+ elif self.with_investment and self.with_on_off:
+ # Investment and OnOff
+ self._create_investment_model()
+ self._create_on_off_model()
+
+ BoundingPatterns.scaled_bounds_with_state(
+ model=self,
+ variable=self.flow_rate,
+ scaling_variable=self.investment.size,
+ relative_bounds=self.relative_flow_rate_bounds,
+ scaling_bounds=(self.element.size.minimum_or_fixed_size, self.element.size.maximum_or_fixed_size),
+ variable_state=self.on_off.on,
+ )
+ else:
+ raise Exception('Unreachable: invalid combination of investment and on/off options')
+
+ @property
+ def with_on_off(self) -> bool:
+ return self.element.on_off_parameters is not None
+
+ @property
+ def with_investment(self) -> bool:
+ return isinstance(self.element.size, InvestParameters)
+
+ # Properties for clean access to variables
+ @property
+ def flow_rate(self) -> linopy.Variable:
+ """Main flow rate variable"""
+ return self['flow_rate']
+
+ @property
+ def total_flow_hours(self) -> linopy.Variable:
+ """Total flow hours variable"""
+ return self['total_flow_hours']
+
+ def results_structure(self):
+ return {
+ **super().results_structure(),
+ 'start': self.element.bus if self.element.is_input_in_component else self.element.component,
+ 'end': self.element.component if self.element.is_input_in_component else self.element.bus,
+ 'component': self.element.component,
+ }
def _create_shares(self):
- # Arbeitskosten:
- if self.element.effects_per_flow_hour != {}:
+ # Effects per flow hour
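+ # e.g. effects_per_flow_hour={'cost': 45, 'CO2': 0.8} adds
+ # flow_rate * hours_per_step * factor to each of the two effects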
+ if self.element.effects_per_flow_hour:
self._model.effects.add_share_to_effects(
- name=self.label_full, # Use the full label of the element
+ name=self.label_full,
expressions={
- effect: self.flow_rate * self._model.hours_per_step * factor.active_data
+ effect: self.flow_rate * self._model.hours_per_step * factor
for effect, factor in self.element.effects_per_flow_hour.items()
},
- target='operation',
+ target='temporal',
)
def _create_bounds_for_load_factor(self):
- # TODO: Add Variable load_factor for better evaluation?
+ """Create load factor constraints using current approach"""
+ # Get the size (either from element or investment)
+ size = self.investment.size if self.with_investment else self.element.size
- # eq: var_sumFlowHours <= size * dt_tot * load_factor_max
+ # Maximum load factor constraint
if self.element.load_factor_max is not None:
- name_short = 'load_factor_max'
- flow_hours_per_size_max = self._model.hours_per_step.sum() * self.element.load_factor_max
- size = self.element.size if self._investment is None else self._investment.size
-
- self.add(
- self._model.add_constraints(
- self.total_flow_hours <= size * flow_hours_per_size_max,
- name=f'{self.label_full}|{name_short}',
- ),
- name_short,
+ flow_hours_per_size_max = self._model.hours_per_step.sum('time') * self.element.load_factor_max
+ self.add_constraints(
+ self.total_flow_hours <= size * flow_hours_per_size_max,
+ short_name='load_factor_max',
)
- # eq: size * sum(dt)* load_factor_min <= var_sumFlowHours
+ # Minimum load factor constraint
if self.element.load_factor_min is not None:
- name_short = 'load_factor_min'
- flow_hours_per_size_min = self._model.hours_per_step.sum() * self.element.load_factor_min
- size = self.element.size if self._investment is None else self._investment.size
-
- self.add(
- self._model.add_constraints(
- self.total_flow_hours >= size * flow_hours_per_size_min,
- name=f'{self.label_full}|{name_short}',
- ),
- name_short,
+ flow_hours_per_size_min = self._model.hours_per_step.sum('time') * self.element.load_factor_min
+ self.add_constraints(
+ self.total_flow_hours >= size * flow_hours_per_size_min,
+ short_name='load_factor_min',
)
@property
- def flow_rate_bounds_on(self) -> tuple[NumericData, NumericData]:
- """Returns absolute flow rate bounds. Important for OnOffModel"""
- relative_minimum, relative_maximum = self.flow_rate_lower_bound_relative, self.flow_rate_upper_bound_relative
- size = self.element.size
- if not isinstance(size, InvestParameters):
- return relative_minimum * size, relative_maximum * size
- if size.fixed_size is not None:
- return relative_minimum * size.fixed_size, relative_maximum * size.fixed_size
- return relative_minimum * size.minimum_size, relative_maximum * size.maximum_size
+ def relative_flow_rate_bounds(self) -> tuple[TemporalData, TemporalData]:
+ if self.element.fixed_relative_profile is not None:
+ return self.element.fixed_relative_profile, self.element.fixed_relative_profile
+ return self.element.relative_minimum, self.element.relative_maximum
@property
- def flow_rate_lower_bound_relative(self) -> NumericData:
- """Returns the lower bound of the flow_rate relative to its size"""
- fixed_profile = self.element.fixed_relative_profile
- if fixed_profile is None:
- return self.element.relative_minimum.active_data
- return fixed_profile.active_data
+ def absolute_flow_rate_bounds(self) -> tuple[TemporalData, TemporalData]:
+ """
+ Returns the absolute bounds the flow_rate can reach.
+ Further constraining might still be needed.
+ """
+ lb_relative, ub_relative = self.relative_flow_rate_bounds
+
+ lb = 0
+ if not self.with_on_off:
+ if not self.with_investment:
+ # Basic case without investment and without OnOff
+ lb = lb_relative * self.element.size
+ elif self.with_investment and self.element.size.mandatory:
+ # With mandatory Investment
+ lb = lb_relative * self.element.size.minimum_or_fixed_size
+
+ if self.with_investment:
+ ub = ub_relative * self.element.size.maximum_or_fixed_size
+ else:
+ ub = ub_relative * self.element.size
+
+ return lb, ub
@property
- def flow_rate_upper_bound_relative(self) -> NumericData:
- """Returns the upper bound of the flow_rate relative to its size"""
- fixed_profile = self.element.fixed_relative_profile
- if fixed_profile is None:
- return self.element.relative_maximum.active_data
- return fixed_profile.active_data
+ def on_off(self) -> OnOffModel | None:
+ """OnOff feature"""
+ if 'on_off' not in self.submodels:
+ return None
+ return self.submodels['on_off']
@property
- def flow_rate_lower_bound(self) -> NumericData:
- """
- Returns the minimum bound the flow_rate can reach.
- Further constraining might be done in OnOffModel and InvestmentModel
- """
- if self.element.on_off_parameters is not None:
- return 0
- if isinstance(self.element.size, InvestParameters):
- if self.element.size.optional:
- return 0
- return self.flow_rate_lower_bound_relative * self.element.size.minimum_size
- return self.flow_rate_lower_bound_relative * self.element.size
+ def _investment(self) -> InvestmentModel | None:
+ """Deprecated alias for investment"""
+ return self.investment
@property
- def flow_rate_upper_bound(self) -> NumericData:
- """
- Returns the maximum bound the flow_rate can reach.
- Further constraining might be done in OnOffModel and InvestmentModel
- """
- if isinstance(self.element.size, InvestParameters):
- return self.flow_rate_upper_bound_relative * self.element.size.maximum_size
- return self.flow_rate_upper_bound_relative * self.element.size
+ def investment(self) -> InvestmentModel | None:
+ """OnOff feature"""
+ if 'investment' not in self.submodels:
+ return None
+ return self.submodels['investment']
+
+ @property
+ def previous_states(self) -> TemporalData | None:
+ """Previous states of the flow rate"""
+ # TODO: This would be nicer to handle in the Flow itself, and allow DataArrays as well.
+ previous_flow_rate = self.element.previous_flow_rate
+ if previous_flow_rate is None:
+ return None
+
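+ # e.g. previous_flow_rate=[0.0, 3.2] yields previous states [0, 1]; values
+ # within CONFIG.Modeling.epsilon of zero count as 'off'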
+ return ModelingUtilitiesAbstract.to_binary(
+ values=xr.DataArray(
+ [previous_flow_rate] if np.isscalar(previous_flow_rate) else previous_flow_rate, dims='time'
+ ),
+ epsilon=CONFIG.Modeling.epsilon,
+ dims='time',
+ )
class BusModel(ElementModel):
- def __init__(self, model: SystemModel, element: Bus):
- super().__init__(model, element)
- self.element: Bus = element
+ element: Bus # Type hint
+
+ def __init__(self, model: FlowSystemModel, element: Bus):
self.excess_input: linopy.Variable | None = None
self.excess_output: linopy.Variable | None = None
+ super().__init__(model, element)
- def do_modeling(self) -> None:
+ def _do_modeling(self) -> None:
+ super()._do_modeling()
# inputs == outputs
for flow in self.element.inputs + self.element.outputs:
- self.add(flow.model.flow_rate, flow.label_full)
- inputs = sum([flow.model.flow_rate for flow in self.element.inputs])
- outputs = sum([flow.model.flow_rate for flow in self.element.outputs])
- eq_bus_balance = self.add(self._model.add_constraints(inputs == outputs, name=f'{self.label_full}|balance'))
+ self.register_variable(flow.submodel.flow_rate, flow.label_full)
+ inputs = sum([flow.submodel.flow_rate for flow in self.element.inputs])
+ outputs = sum([flow.submodel.flow_rate for flow in self.element.outputs])
+ eq_bus_balance = self.add_constraints(inputs == outputs, short_name='balance')
# Excess input/output (penalized slack):
if self.element.with_excess:
- excess_penalty = np.multiply(
- self._model.hours_per_step, self.element.excess_penalty_per_flow_hour.active_data
- )
- self.excess_input = self.add(
- self._model.add_variables(lower=0, coords=self._model.coords, name=f'{self.label_full}|excess_input'),
- 'excess_input',
- )
- self.excess_output = self.add(
- self._model.add_variables(lower=0, coords=self._model.coords, name=f'{self.label_full}|excess_output'),
- 'excess_output',
+ excess_penalty = np.multiply(self._model.hours_per_step, self.element.excess_penalty_per_flow_hour)
+
+ self.excess_input = self.add_variables(lower=0, coords=self._model.get_coords(), short_name='excess_input')
+
+ self.excess_output = self.add_variables(
+ lower=0, coords=self._model.get_coords(), short_name='excess_output'
)
+
eq_bus_balance.lhs -= -self.excess_input + self.excess_output
self._model.effects.add_share_to_penalty(self.label_of_element, (self.excess_input * excess_penalty).sum())
self._model.effects.add_share_to_penalty(self.label_of_element, (self.excess_output * excess_penalty).sum())
def results_structure(self):
- inputs = [flow.model.flow_rate.name for flow in self.element.inputs]
- outputs = [flow.model.flow_rate.name for flow in self.element.outputs]
+ inputs = [flow.submodel.flow_rate.name for flow in self.element.inputs]
+ outputs = [flow.submodel.flow_rate.name for flow in self.element.outputs]
if self.excess_input is not None:
inputs.append(self.excess_input.name)
if self.excess_output is not None:
outputs.append(self.excess_output.name)
- return {**super().results_structure(), 'inputs': inputs, 'outputs': outputs}
+ return {
+ **super().results_structure(),
+ 'inputs': inputs,
+ 'outputs': outputs,
+ 'flows': [flow.label_full for flow in self.element.inputs + self.element.outputs],
+ }
class ComponentModel(ElementModel):
- def __init__(self, model: SystemModel, element: Component):
- super().__init__(model, element)
- self.element: Component = element
+ element: Component # Type hint
+
+ def __init__(self, model: FlowSystemModel, element: Component):
self.on_off: OnOffModel | None = None
+ super().__init__(model, element)
- def do_modeling(self):
+ def _do_modeling(self):
"""Initiates all FlowModels"""
+ super()._do_modeling()
all_flows = self.element.inputs + self.element.outputs
if self.element.on_off_parameters:
for flow in all_flows:
@@ -729,34 +789,64 @@ def do_modeling(self):
flow.on_off_parameters = OnOffParameters()
for flow in all_flows:
- self.add(flow.create_model(self._model), flow.label)
-
- for sub_model in self.sub_models:
- sub_model.do_modeling()
+ self.add_submodels(flow.create_model(self._model), short_name=flow.label)
if self.element.on_off_parameters:
- self.on_off = self.add(
- OnOffModel(
- self._model,
- self.element.on_off_parameters,
- self.label_of_element,
- defining_variables=[flow.model.flow_rate for flow in all_flows],
- defining_bounds=[flow.model.flow_rate_bounds_on for flow in all_flows],
- previous_values=[flow.previous_flow_rate for flow in all_flows],
+ on = self.add_variables(binary=True, short_name='on', coords=self._model.get_coords())
+ if len(all_flows) == 1:
+ self.add_constraints(on == all_flows[0].submodel.on_off.on, short_name='on')
+ else:
+ flow_ons = [flow.submodel.on_off.on for flow in all_flows]
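+ # Link the component state to its flows: on == 1 iff at least one flow is on
+ # (the upper bound forces on=0 when all flows are off; the lower bound lifts it otherwise)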
+ # TODO: Is the EPSILON even necessary?
+ self.add_constraints(on <= sum(flow_ons) + CONFIG.Modeling.epsilon, short_name='on|ub')
+ self.add_constraints(
+ on >= sum(flow_ons) / (len(flow_ons) + CONFIG.Modeling.epsilon), short_name='on|lb'
)
- )
- self.on_off.do_modeling()
+ self.on_off = self.add_submodels(
+ OnOffModel(
+ model=self._model,
+ label_of_element=self.label_of_element,
+ parameters=self.element.on_off_parameters,
+ on_variable=on,
+ label_of_model=self.label_of_element,
+ previous_states=self.previous_states,
+ ),
+ short_name='on_off',
+ )
if self.element.prevent_simultaneous_flows:
# Simultaneous usage --> only one flow is on at a time, but needs a binary for every flow
- on_variables = [flow.model.on_off.on for flow in self.element.prevent_simultaneous_flows]
- simultaneous_use = self.add(PreventSimultaneousUsageModel(self._model, on_variables, self.label_full))
- simultaneous_use.do_modeling()
+ ModelingPrimitives.mutual_exclusivity_constraint(
+ self,
+ binary_variables=[flow.submodel.on_off.on for flow in self.element.prevent_simultaneous_flows],
+ short_name='prevent_simultaneous_use',
+ )
def results_structure(self):
return {
**super().results_structure(),
- 'inputs': [flow.model.flow_rate.name for flow in self.element.inputs],
- 'outputs': [flow.model.flow_rate.name for flow in self.element.outputs],
+ 'inputs': [flow.submodel.flow_rate.name for flow in self.element.inputs],
+ 'outputs': [flow.submodel.flow_rate.name for flow in self.element.outputs],
+ 'flows': [flow.label_full for flow in self.element.inputs + self.element.outputs],
}
+
+ @property
+ def previous_states(self) -> xr.DataArray | None:
+ """Previous state of the component, derived from its flows"""
+ if self.element.on_off_parameters is None:
+ raise ValueError(f'OnOffModel not present in\n{self}\nCannot access previous_states')
+
+ previous_states = [flow.submodel.on_off._previous_states for flow in self.element.inputs + self.element.outputs]
+ previous_states = [da for da in previous_states if da is not None]
+
+ if not previous_states: # Empty list
+ return None
+
+ max_len = max(da.sizes['time'] for da in previous_states)
+
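+ # e.g. states [1] and [0, 1, 1] are aligned on time=-3..-1 as [0, 0, 1]
+ # and [0, 1, 1]; any() across the flow dim then yields [0, 1, 1]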
+ padded_previous_states = [
+ da.assign_coords(time=range(-da.sizes['time'], 0)).reindex(time=range(-max_len, 0), fill_value=0)
+ for da in previous_states
+ ]
+ return xr.concat(padded_previous_states, dim='flow').any(dim='flow').astype(int)
diff --git a/flixopt/features.py b/flixopt/features.py
index 7aafe242d..0d1fc7784 100644
--- a/flixopt/features.py
+++ b/flixopt/features.py
@@ -11,837 +11,366 @@
import linopy
import numpy as np
-from .config import CONFIG
-from .core import NumericData, Scalar, TimeSeries
-from .structure import Model, SystemModel
+from .modeling import BoundingPatterns, ModelingPrimitives, ModelingUtilities
+from .structure import FlowSystemModel, Submodel
if TYPE_CHECKING:
+ from .core import FlowSystemDimensions, Scalar, TemporalData
from .interface import InvestParameters, OnOffParameters, Piecewise
logger = logging.getLogger('flixopt')
-class InvestmentModel(Model):
- """Class for modeling an investment"""
+class InvestmentModel(Submodel):
+ """
+ This feature model describes an investment decision.
+ It creates the size variable (and, for optional investments, a binary invested variable) and applies the corresponding bounds.
+
+ Args:
+ model: The optimization model instance
+ label_of_element: The label of the parent (Element). Used to construct the full label of the model.
+ parameters: The parameters of the feature model.
+ label_of_model: The label of the model. This is needed to construct the full label of the model.
+
+ """
+
+ parameters: InvestParameters
def __init__(
self,
- model: SystemModel,
+ model: FlowSystemModel,
label_of_element: str,
parameters: InvestParameters,
- defining_variable: linopy.Variable,
- relative_bounds_of_defining_variable: tuple[NumericData, NumericData],
- label: str | None = None,
- on_variable: linopy.Variable | None = None,
+ label_of_model: str | None = None,
):
- super().__init__(model, label_of_element, label)
- self.size: Scalar | linopy.Variable | None = None
- self.is_invested: linopy.Variable | None = None
-
self.piecewise_effects: PiecewiseEffectsModel | None = None
-
- self._on_variable = on_variable
- self._defining_variable = defining_variable
- self._relative_bounds_of_defining_variable = relative_bounds_of_defining_variable
self.parameters = parameters
+ super().__init__(model, label_of_element=label_of_element, label_of_model=label_of_model)
+
+ def _do_modeling(self):
+ super()._do_modeling()
+ self._create_variables_and_constraints()
+ self._add_effects()
+
+ def _create_variables_and_constraints(self):
+ size_min, size_max = (self.parameters.minimum_or_fixed_size, self.parameters.maximum_or_fixed_size)
+ if self.parameters.linked_periods is not None:
+ # Mask size bounds: linked_periods is a binary DataArray that zeros out non-linked periods
+ size_min = size_min * self.parameters.linked_periods
+ size_max = size_max * self.parameters.linked_periods
+
+ self.add_variables(
+ short_name='size',
+ lower=size_min if self.parameters.mandatory else 0,
+ upper=size_max,
+ coords=self._model.get_coords(['period', 'scenario']),
+ )
- def do_modeling(self):
- if self.parameters.fixed_size and not self.parameters.optional:
- self.size = self.add(
- self._model.add_variables(
- lower=self.parameters.fixed_size, upper=self.parameters.fixed_size, name=f'{self.label_full}|size'
- ),
- 'size',
+ if not self.parameters.mandatory:
+ self.add_variables(
+ binary=True,
+ coords=self._model.get_coords(['period', 'scenario']),
+ short_name='invested',
)
- else:
- self.size = self.add(
- self._model.add_variables(
- lower=0 if self.parameters.optional else self.parameters.minimum_size,
- upper=self.parameters.maximum_size,
- name=f'{self.label_full}|size',
- ),
- 'size',
+ BoundingPatterns.bounds_with_state(
+ self,
+ variable=self.size,
+ variable_state=self._variables['invested'],
+ bounds=(self.parameters.minimum_or_fixed_size, self.parameters.maximum_or_fixed_size),
)
- # Optional
- if self.parameters.optional:
- self.is_invested = self.add(
- self._model.add_variables(binary=True, name=f'{self.label_full}|is_invested'), 'is_invested'
+ if self.parameters.linked_periods is not None:
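+ # Chain pairwise equalities so the size is identical across all linked periods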
+ masked_size = self.size.where(self.parameters.linked_periods, drop=True)
+ self.add_constraints(
+ masked_size.isel(period=slice(None, -1)) == masked_size.isel(period=slice(1, None)),
+ short_name='linked_periods',
)
- self._create_bounds_for_optional_investment()
-
- # Bounds for defining variable
- self._create_bounds_for_defining_variable()
-
- self._create_shares()
-
- def _create_shares(self):
- # fix_effects:
- fix_effects = self.parameters.fix_effects
- if fix_effects != {}:
+ def _add_effects(self):
+ """Add investment effects"""
+ if self.parameters.effects_of_investment:
self._model.effects.add_share_to_effects(
name=self.label_of_element,
expressions={
- effect: self.is_invested * factor if self.is_invested is not None else factor
- for effect, factor in fix_effects.items()
+ effect: self.invested * factor if self.invested is not None else factor
+ for effect, factor in self.parameters.effects_of_investment.items()
},
- target='invest',
+ target='periodic',
)
- if self.parameters.divest_effects != {} and self.parameters.optional:
- # share: divest_effects - isInvested * divest_effects
+ if self.parameters.effects_of_retirement and not self.parameters.mandatory:
self._model.effects.add_share_to_effects(
name=self.label_of_element,
expressions={
- effect: -self.is_invested * factor + factor
- for effect, factor in self.parameters.divest_effects.items()
+ effect: -self.invested * factor + factor
+ for effect, factor in self.parameters.effects_of_retirement.items()
},
- target='invest',
+ target='periodic',
)
- if self.parameters.specific_effects != {}:
+ if self.parameters.effects_of_investment_per_size:
self._model.effects.add_share_to_effects(
name=self.label_of_element,
- expressions={effect: self.size * factor for effect, factor in self.parameters.specific_effects.items()},
- target='invest',
+ expressions={
+ effect: self.size * factor
+ for effect, factor in self.parameters.effects_of_investment_per_size.items()
+ },
+ target='periodic',
)
- if self.parameters.piecewise_effects:
- self.piecewise_effects = self.add(
+ if self.parameters.piecewise_effects_of_investment:
+ self.piecewise_effects = self.add_submodels(
PiecewiseEffectsModel(
model=self._model,
label_of_element=self.label_of_element,
- piecewise_origin=(self.size.name, self.parameters.piecewise_effects.piecewise_origin),
- piecewise_shares=self.parameters.piecewise_effects.piecewise_shares,
- zero_point=self.is_invested,
- ),
- 'segments',
- )
- self.piecewise_effects.do_modeling()
-
- def _create_bounds_for_optional_investment(self):
- if self.parameters.fixed_size:
- # eq: investment_size = isInvested * fixed_size
- self.add(
- self._model.add_constraints(
- self.size == self.is_invested * self.parameters.fixed_size, name=f'{self.label_full}|is_invested'
- ),
- 'is_invested',
- )
-
- else:
- # eq1: P_invest <= isInvested * investSize_max
- self.add(
- self._model.add_constraints(
- self.size <= self.is_invested * self.parameters.maximum_size,
- name=f'{self.label_full}|is_invested_ub',
- ),
- 'is_invested_ub',
- )
-
- # eq2: P_invest >= isInvested * max(epsilon, investSize_min)
- self.add(
- self._model.add_constraints(
- self.size >= self.is_invested * np.maximum(CONFIG.Modeling.epsilon, self.parameters.minimum_size),
- name=f'{self.label_full}|is_invested_lb',
- ),
- 'is_invested_lb',
- )
-
- def _create_bounds_for_defining_variable(self):
- variable = self._defining_variable
- lb_relative, ub_relative = self._relative_bounds_of_defining_variable
- if np.all(lb_relative == ub_relative):
- self.add(
- self._model.add_constraints(
- variable == self.size * ub_relative, name=f'{self.label_full}|fix_{variable.name}'
- ),
- f'fix_{variable.name}',
- )
- if self._on_variable is not None:
- raise ValueError(
- f'Flow {self.label_full} has a fixed relative flow rate and an on_variable.'
- f'This combination is currently not supported.'
- )
- return
-
- # eq: defining_variable(t) <= size * upper_bound(t)
- self.add(
- self._model.add_constraints(
- variable <= self.size * ub_relative, name=f'{self.label_full}|ub_{variable.name}'
- ),
- f'ub_{variable.name}',
- )
-
- if self._on_variable is None:
- # eq: defining_variable(t) >= investment_size * relative_minimum(t)
- self.add(
- self._model.add_constraints(
- variable >= self.size * lb_relative, name=f'{self.label_full}|lb_{variable.name}'
- ),
- f'lb_{variable.name}',
- )
- else:
- ## 2. Gleichung: Minimum durch Investmentgröße und On
- # eq: defining_variable(t) >= mega * (On(t)-1) + size * relative_minimum(t)
- # ... mit mega = relative_maximum * maximum_size
- # äquivalent zu:.
- # eq: - defining_variable(t) + mega * On(t) + size * relative_minimum(t) <= + mega
- mega = lb_relative * self.parameters.maximum_size
- on = self._on_variable
- self.add(
- self._model.add_constraints(
- variable >= mega * (on - 1) + self.size * lb_relative, name=f'{self.label_full}|lb_{variable.name}'
- ),
- f'lb_{variable.name}',
- )
- # anmerkung: Glg bei Spezialfall relative_minimum = 0 redundant zu OnOff ??
-
-
-class StateModel(Model):
- """
- Handles basic on/off binary states for defining variables
- """
-
- def __init__(
- self,
- model: SystemModel,
- label_of_element: str,
- defining_variables: list[linopy.Variable],
- defining_bounds: list[tuple[NumericData, NumericData]],
- previous_values: list[NumericData | None] | None = None,
- use_off: bool = True,
- on_hours_total_min: NumericData | None = 0,
- on_hours_total_max: NumericData | None = None,
- effects_per_running_hour: dict[str, NumericData] | None = None,
- label: str | None = None,
- ):
- """
- Models binary state variables based on a continous variable.
-
- Args:
- model: The SystemModel that is used to create the model.
- label_of_element: The label of the parent (Element). Used to construct the full label of the model.
- defining_variables: List of Variables that are used to define the state
- defining_bounds: List of Tuples, defining the absolute bounds of each defining variable
- previous_values: List of previous values of the defining variables
- use_off: Whether to use the off state or not
- on_hours_total_min: min. overall sum of operating hours.
- on_hours_total_max: max. overall sum of operating hours.
- effects_per_running_hour: Costs per operating hours
- label: Label of the OnOffModel
- """
- super().__init__(model, label_of_element, label)
- assert len(defining_variables) == len(defining_bounds), 'Every defining Variable needs bounds to Model OnOff'
- self._defining_variables = defining_variables
- self._defining_bounds = defining_bounds
- self._previous_values = previous_values or []
- self._on_hours_total_min = on_hours_total_min if on_hours_total_min is not None else 0
- self._on_hours_total_max = on_hours_total_max if on_hours_total_max is not None else np.inf
- self._use_off = use_off
- self._effects_per_running_hour = effects_per_running_hour or {}
-
- self.on = None
- self.total_on_hours: linopy.Variable | None = None
- self.off = None
-
- def do_modeling(self):
- self.on = self.add(
- self._model.add_variables(
- name=f'{self.label_full}|on',
- binary=True,
- coords=self._model.coords,
- ),
- 'on',
- )
-
- self.total_on_hours = self.add(
- self._model.add_variables(
- lower=self._on_hours_total_min,
- upper=self._on_hours_total_max,
- coords=None,
- name=f'{self.label_full}|on_hours_total',
- ),
- 'on_hours_total',
- )
-
- self.add(
- self._model.add_constraints(
- self.total_on_hours == (self.on * self._model.hours_per_step).sum(),
- name=f'{self.label_full}|on_hours_total',
- ),
- 'on_hours_total',
- )
-
- # Add defining constraints for each variable
- self._add_defining_constraints()
-
- if self._use_off:
- self.off = self.add(
- self._model.add_variables(
- name=f'{self.label_full}|off',
- binary=True,
- coords=self._model.coords,
- ),
- 'off',
- )
-
- # Constraint: on + off = 1
- self.add(self._model.add_constraints(self.on + self.off == 1, name=f'{self.label_full}|off'), 'off')
-
- return self
-
- def _add_defining_constraints(self):
- """Add constraints that link defining variables to the on state"""
- nr_of_def_vars = len(self._defining_variables)
-
- if nr_of_def_vars == 1:
- # Case for a single defining variable
- def_var = self._defining_variables[0]
- lb, ub = self._defining_bounds[0]
-
- # Constraint: on * lower_bound <= def_var
- self.add(
- self._model.add_constraints(
- self.on * np.maximum(CONFIG.Modeling.epsilon, lb) <= def_var, name=f'{self.label_full}|on_con1'
- ),
- 'on_con1',
- )
-
- # Constraint: on * upper_bound >= def_var
- self.add(self._model.add_constraints(self.on * ub >= def_var, name=f'{self.label_full}|on_con2'), 'on_con2')
- else:
- # Case for multiple defining variables
- ub = sum(bound[1] for bound in self._defining_bounds) / nr_of_def_vars
- lb = CONFIG.Modeling.epsilon # TODO: Can this be a bigger value? (maybe the smallest bound?)
-
- # Constraint: on * epsilon <= sum(all_defining_variables)
- self.add(
- self._model.add_constraints(
- self.on * lb <= sum(self._defining_variables), name=f'{self.label_full}|on_con1'
+ label_of_model=f'{self.label_of_element}|PiecewiseEffects',
+ piecewise_origin=(self.size.name, self.parameters.piecewise_effects_of_investment.piecewise_origin),
+ piecewise_shares=self.parameters.piecewise_effects_of_investment.piecewise_shares,
+ zero_point=self.invested,
),
- 'on_con1',
+ short_name='segments',
)
- # Constraint to ensure all variables are zero when off.
- # Divide by nr_of_def_vars to improve numerical stability (smaller factors)
- self.add(
- self._model.add_constraints(
- self.on * ub >= sum([def_var / nr_of_def_vars for def_var in self._defining_variables]),
- name=f'{self.label_full}|on_con2',
- ),
- 'on_con2',
- )
-
- @property
- def previous_states(self) -> np.ndarray:
- """Computes the previous states {0, 1} of defining variables as a binary array from their previous values."""
- return StateModel.compute_previous_states(self._previous_values, epsilon=CONFIG.Modeling.epsilon)
-
@property
- def previous_on_states(self) -> np.ndarray:
- return self.previous_states
+ def size(self) -> linopy.Variable:
+ """Investment size variable"""
+ return self._variables['size']
@property
- def previous_off_states(self):
- return 1 - self.previous_states
+ def invested(self) -> linopy.Variable | None:
+ """Binary investment decision variable"""
+ if 'invested' not in self._variables:
+ return None
+ return self._variables['invested']
- @staticmethod
- def compute_previous_states(previous_values: list[NumericData | None] | None, epsilon: float = 1e-5) -> np.ndarray:
- """Computes the previous states {0, 1} of defining variables as a binary array from their previous values."""
- if not previous_values or all([val is None for val in previous_values]):
- return np.array([0])
- # Convert to 2D-array and compute binary on/off states
- previous_values = np.array([values for values in previous_values if values is not None]) # Filter out None
- if previous_values.ndim > 1:
- return np.any(~np.isclose(previous_values, 0, atol=epsilon), axis=0).astype(int)
-
- return (~np.isclose(previous_values, 0, atol=epsilon)).astype(int)
-
-
-class SwitchStateModel(Model):
- """
- Handles switch on/off transitions
- """
+class OnOffModel(Submodel):
+ """OnOff model using factory patterns"""
def __init__(
self,
- model: SystemModel,
+ model: FlowSystemModel,
label_of_element: str,
- state_variable: linopy.Variable,
- previous_state=0,
- switch_on_max: Scalar | None = None,
- label: str | None = None,
- ):
- super().__init__(model, label_of_element, label)
- self._state_variable = state_variable
- self.previous_state = previous_state
- self._switch_on_max = switch_on_max if switch_on_max is not None else np.inf
-
- self.switch_on = None
- self.switch_off = None
- self.switch_on_nr = None
-
- def do_modeling(self):
- """Create switch variables and constraints"""
-
- # Create switch variables
- self.switch_on = self.add(
- self._model.add_variables(binary=True, name=f'{self.label_full}|switch_on', coords=self._model.coords),
- 'switch_on',
- )
-
- self.switch_off = self.add(
- self._model.add_variables(binary=True, name=f'{self.label_full}|switch_off', coords=self._model.coords),
- 'switch_off',
- )
-
- # Create count variable for number of switches
- self.switch_on_nr = self.add(
- self._model.add_variables(
- upper=self._switch_on_max,
- lower=0,
- name=f'{self.label_full}|switch_on_nr',
- ),
- 'switch_on_nr',
- )
-
- # Add switch constraints for all entries after the first timestep
- self.add(
- self._model.add_constraints(
- self.switch_on.isel(time=slice(1, None)) - self.switch_off.isel(time=slice(1, None))
- == self._state_variable.isel(time=slice(1, None)) - self._state_variable.isel(time=slice(None, -1)),
- name=f'{self.label_full}|switch_con',
- ),
- 'switch_con',
- )
-
- # Initial switch constraint
- self.add(
- self._model.add_constraints(
- self.switch_on.isel(time=0) - self.switch_off.isel(time=0)
- == self._state_variable.isel(time=0) - self.previous_state,
- name=f'{self.label_full}|initial_switch_con',
- ),
- 'initial_switch_con',
- )
-
- # Mutual exclusivity constraint
- self.add(
- self._model.add_constraints(
- self.switch_on + self.switch_off <= 1.1, name=f'{self.label_full}|switch_on_or_off'
- ),
- 'switch_on_or_off',
- )
-
- # Total switch-on count constraint
- self.add(
- self._model.add_constraints(
- self.switch_on_nr == self.switch_on.sum('time'), name=f'{self.label_full}|switch_on_nr'
- ),
- 'switch_on_nr',
- )
-
- return self
-
-
-class ConsecutiveStateModel(Model):
- """
- Handles tracking consecutive durations in a state
- """
-
- def __init__(
- self,
- model: SystemModel,
- label_of_element: str,
- state_variable: linopy.Variable,
- minimum_duration: NumericData | None = None,
- maximum_duration: NumericData | None = None,
- previous_states: NumericData | None = None,
- label: str | None = None,
+ parameters: OnOffParameters,
+ on_variable: linopy.Variable,
+ previous_states: TemporalData | None,
+ label_of_model: str | None = None,
):
"""
- Model and constraint the consecutive duration of a state variable.
+ This feature model is used to model the on/off state of flow_rate(s). It does not matter whether the flow_rates
+ are bounded by a size variable or by a hard bound: the bound used here is the absolute highest/lowest bound.
Args:
- model: The SystemModel that is used to create the model.
+ model: The optimization model instance
label_of_element: The label of the parent (Element). Used to construct the full label of the model.
- state_variable: The state variable that is used to model the duration. state = {0, 1}
- minimum_duration: The minimum duration of the state variable.
- maximum_duration: The maximum duration of the state variable.
- previous_states: The previous states of the state variable.
- label: The label of the model. Used to construct the full label of the model.
+ parameters: The parameters of the feature model.
+ on_variable: The variable that determines the on state
+ previous_states: The previous binary on/off states, derived from the previous flow_rates
+ label_of_model: The label of the model. This is needed to construct the full label of the model.
"""
- super().__init__(model, label_of_element, label)
- self._state_variable = state_variable
+ self.on = on_variable
self._previous_states = previous_states
- self._minimum_duration = minimum_duration
- self._maximum_duration = maximum_duration
-
- if isinstance(self._minimum_duration, TimeSeries):
- self._minimum_duration = self._minimum_duration.active_data
- if isinstance(self._maximum_duration, TimeSeries):
- self._maximum_duration = self._maximum_duration.active_data
-
- self.duration = None
-
- def do_modeling(self):
- """Create consecutive duration variables and constraints"""
- # Get the hours per step
- hours_per_step = self._model.hours_per_step
- mega = hours_per_step.sum('time') + self.previous_duration
-
- # Create the duration variable
- self.duration = self.add(
- self._model.add_variables(
- lower=0,
- upper=self._maximum_duration if self._maximum_duration is not None else mega,
- coords=self._model.coords,
- name=f'{self.label_full}|hours',
- ),
- 'hours',
- )
-
- # Add constraints
-
- # Upper bound constraint
- self.add(
- self._model.add_constraints(self.duration <= self._state_variable * mega, name=f'{self.label_full}|con1'),
- 'con1',
- )
-
- # Forward constraint
- self.add(
- self._model.add_constraints(
- self.duration.isel(time=slice(1, None))
- <= self.duration.isel(time=slice(None, -1)) + hours_per_step.isel(time=slice(None, -1)),
- name=f'{self.label_full}|con2a',
- ),
- 'con2a',
- )
-
- # Backward constraint
- self.add(
- self._model.add_constraints(
- self.duration.isel(time=slice(1, None))
- >= self.duration.isel(time=slice(None, -1))
- + hours_per_step.isel(time=slice(None, -1))
- + (self._state_variable.isel(time=slice(1, None)) - 1) * mega,
- name=f'{self.label_full}|con2b',
- ),
- 'con2b',
- )
-
- # Add minimum duration constraints if specified
- if self._minimum_duration is not None:
- self.add(
- self._model.add_constraints(
- self.duration
- >= (
- self._state_variable.isel(time=slice(None, -1)) - self._state_variable.isel(time=slice(1, None))
- )
- * self._minimum_duration.isel(time=slice(None, -1)),
- name=f'{self.label_full}|minimum',
- ),
- 'minimum',
- )
-
- # Handle initial condition
- if 0 < self.previous_duration < self._minimum_duration.isel(time=0):
- self.add(
- self._model.add_constraints(
- self._state_variable.isel(time=0) == 1, name=f'{self.label_full}|initial_minimum'
- ),
- 'initial_minimum',
- )
-
- # Set initial value
- self.add(
- self._model.add_constraints(
- self.duration.isel(time=0)
- == (hours_per_step.isel(time=0) + self.previous_duration) * self._state_variable.isel(time=0),
- name=f'{self.label_full}|initial',
- ),
- 'initial',
- )
-
- return self
-
- @property
- def previous_duration(self) -> Scalar:
- """Computes the previous duration of the state variable"""
- # TODO: Allow for other/dynamic timestep resolutions
- return ConsecutiveStateModel.compute_consecutive_hours_in_state(
- self._previous_states, self._model.hours_per_step.isel(time=0).item()
- )
-
- @staticmethod
- def compute_consecutive_hours_in_state(
- binary_values: NumericData, hours_per_timestep: int | float | np.ndarray
- ) -> Scalar:
- """
- Computes the final consecutive duration in state 'on' (=1) in hours, from a binary array.
-
- Args:
- binary_values: An int or 1D binary array containing only `0`s and `1`s.
- hours_per_timestep: The duration of each timestep in hours.
- If a scalar is provided, it is used for all timesteps.
- If an array is provided, it must be as long as the last consecutive duration in binary_values.
-
- Returns:
- The duration of the binary variable in hours.
-
- Raises
- ------
- TypeError
- If the length of binary_values and dt_in_hours is not equal, but None is a scalar.
- """
- if np.isscalar(binary_values) and np.isscalar(hours_per_timestep):
- return binary_values * hours_per_timestep
- elif np.isscalar(binary_values) and not np.isscalar(hours_per_timestep):
- return binary_values * hours_per_timestep[-1]
-
- if np.isclose(binary_values[-1], 0, atol=CONFIG.Modeling.epsilon):
- return 0
-
- if np.isscalar(hours_per_timestep):
- hours_per_timestep = np.ones(len(binary_values)) * hours_per_timestep
- hours_per_timestep: np.ndarray
-
- indexes_with_zero_values = np.where(np.isclose(binary_values, 0, atol=CONFIG.Modeling.epsilon))[0]
- if len(indexes_with_zero_values) == 0:
- nr_of_indexes_with_consecutive_ones = len(binary_values)
- else:
- nr_of_indexes_with_consecutive_ones = len(binary_values) - indexes_with_zero_values[-1] - 1
-
- if len(hours_per_timestep) < nr_of_indexes_with_consecutive_ones:
- raise ValueError(
- f'When trying to calculate the consecutive duration, the length of the last duration '
- f'({nr_of_indexes_with_consecutive_ones}) is longer than the provided hours_per_timestep ({len(hours_per_timestep)}), '
- f'as {binary_values=}'
- )
-
- return np.sum(
- binary_values[-nr_of_indexes_with_consecutive_ones:]
- * hours_per_timestep[-nr_of_indexes_with_consecutive_ones:]
- )
-
-
-class OnOffModel(Model):
- """
- Class for modeling the on and off state of a variable
- Uses component models to create a modular implementation
- """
-
- def __init__(
- self,
- model: SystemModel,
- on_off_parameters: OnOffParameters,
- label_of_element: str,
- defining_variables: list[linopy.Variable],
- defining_bounds: list[tuple[NumericData, NumericData]],
- previous_values: list[NumericData | None],
- label: str | None = None,
- ):
- """
- Constructor for OnOffModel
-
- Args:
- model: Reference to the SystemModel
- on_off_parameters: Parameters for the OnOffModel
- label_of_element: Label of the Parent
- defining_variables: List of Variables that are used to define the OnOffModel
- defining_bounds: List of Tuples, defining the absolute bounds of each defining variable
- previous_values: List of previous values of the defining variables
- label: Label of the OnOffModel
- """
- super().__init__(model, label_of_element, label)
- self.parameters = on_off_parameters
- self._defining_variables = defining_variables
- self._defining_bounds = defining_bounds
- self._previous_values = previous_values
-
- self.state_model = None
- self.switch_state_model = None
- self.consecutive_on_model = None
- self.consecutive_off_model = None
-
- def do_modeling(self):
- """Create all variables and constraints for the OnOffModel"""
-
- # Create binary state component
- self.state_model = StateModel(
- model=self._model,
- label_of_element=self.label_of_element,
- defining_variables=self._defining_variables,
- defining_bounds=self._defining_bounds,
- previous_values=self._previous_values,
- use_off=self.parameters.use_off,
- on_hours_total_min=self.parameters.on_hours_total_min,
- on_hours_total_max=self.parameters.on_hours_total_max,
- effects_per_running_hour=self.parameters.effects_per_running_hour,
+ self.parameters = parameters
+ super().__init__(model, label_of_element, label_of_model=label_of_model)
+
+ def _do_modeling(self):
+ super()._do_modeling()
+
+ if self.parameters.use_off:
+ off = self.add_variables(binary=True, short_name='off', coords=self._model.get_coords())
+ self.add_constraints(self.on + off == 1, short_name='complementary')
+
+ # 3. Total duration tracking using existing pattern
+ ModelingPrimitives.expression_tracking_variable(
+ self,
+ tracked_expression=(self.on * self._model.hours_per_step).sum('time'),
+ bounds=(
+ self.parameters.on_hours_total_min if self.parameters.on_hours_total_min is not None else 0,
+ self.parameters.on_hours_total_max if self.parameters.on_hours_total_max is not None else np.inf,
+ ), # TODO: self._model.hours_per_step.sum('time').item() + self._get_previous_on_duration())
+ short_name='on_hours_total',
+ coords=['period', 'scenario'],
)
- self.add(self.state_model)
- self.state_model.do_modeling()
- # Create switch component if needed
+ # 4. Switch tracking using existing pattern
if self.parameters.use_switch_on:
- self.switch_state_model = SwitchStateModel(
- model=self._model,
- label_of_element=self.label_of_element,
- state_variable=self.state_model.on,
- previous_state=self.state_model.previous_on_states[-1],
- switch_on_max=self.parameters.switch_on_total_max,
- )
- self.add(self.switch_state_model)
- self.switch_state_model.do_modeling()
+ self.add_variables(binary=True, short_name='switch|on', coords=self.get_coords())
+ self.add_variables(binary=True, short_name='switch|off', coords=self.get_coords())
+
+ BoundingPatterns.state_transition_bounds(
+ self,
+ state_variable=self.on,
+ switch_on=self.switch_on,
+ switch_off=self.switch_off,
+ name=f'{self.label_of_model}|switch',
+ previous_state=self._previous_states.isel(time=-1) if self._previous_states is not None else 0,
+ coord='time',
+ )
+
+ if self.parameters.switch_on_total_max is not None:
+ count = self.add_variables(
+ lower=0,
+ upper=self.parameters.switch_on_total_max,
+ coords=self._model.get_coords(('period', 'scenario')),
+ short_name='switch|count',
+ )
+ self.add_constraints(count == self.switch_on.sum('time'), short_name='switch|count')
- # Create consecutive on hours component if needed
+ # 5. Consecutive on duration using existing pattern
if self.parameters.use_consecutive_on_hours:
- self.consecutive_on_model = ConsecutiveStateModel(
- model=self._model,
- label_of_element=self.label_of_element,
- state_variable=self.state_model.on,
+ ModelingPrimitives.consecutive_duration_tracking(
+ self,
+ state_variable=self.on,
+ short_name='consecutive_on_hours',
minimum_duration=self.parameters.consecutive_on_hours_min,
maximum_duration=self.parameters.consecutive_on_hours_max,
- previous_states=self.state_model.previous_on_states,
- label='ConsecutiveOn',
+ duration_per_step=self.hours_per_step,
+ duration_dim='time',
+ previous_duration=self._get_previous_on_duration(),
)
- self.add(self.consecutive_on_model)
- self.consecutive_on_model.do_modeling()
- # Create consecutive off hours component if needed
+ # 6. Consecutive off duration using existing pattern
if self.parameters.use_consecutive_off_hours:
- self.consecutive_off_model = ConsecutiveStateModel(
- model=self._model,
- label_of_element=self.label_of_element,
- state_variable=self.state_model.off,
+ ModelingPrimitives.consecutive_duration_tracking(
+ self,
+ state_variable=self.off,
+ short_name='consecutive_off_hours',
minimum_duration=self.parameters.consecutive_off_hours_min,
maximum_duration=self.parameters.consecutive_off_hours_max,
- previous_states=self.state_model.previous_off_states,
- label='ConsecutiveOff',
+ duration_per_step=self.hours_per_step,
+ duration_dim='time',
+ previous_duration=self._get_previous_off_duration(),
)
- self.add(self.consecutive_off_model)
- self.consecutive_off_model.do_modeling()
- self._create_shares()
+ self._add_effects()
- def _create_shares(self):
+ def _add_effects(self):
+ """Add operational effects"""
if self.parameters.effects_per_running_hour:
self._model.effects.add_share_to_effects(
name=self.label_of_element,
expressions={
- effect: self.state_model.on * factor * self._model.hours_per_step
+ effect: self.on * factor * self._model.hours_per_step
for effect, factor in self.parameters.effects_per_running_hour.items()
},
- target='operation',
+ target='temporal',
)
if self.parameters.effects_per_switch_on:
self._model.effects.add_share_to_effects(
name=self.label_of_element,
expressions={
- effect: self.switch_state_model.switch_on * factor
- for effect, factor in self.parameters.effects_per_switch_on.items()
+ effect: self.switch_on * factor for effect, factor in self.parameters.effects_per_switch_on.items()
},
- target='operation',
+ target='temporal',
)
+ # Properties access variables from Submodel's tracking system
+
@property
- def on(self):
- return self.state_model.on
+ def on_hours_total(self) -> linopy.Variable:
+ """Total on hours variable"""
+ return self['on_hours_total']
@property
- def off(self):
- return self.state_model.off
+ def off(self) -> linopy.Variable | None:
+ """Binary off state variable"""
+ return self.get('off')
@property
- def switch_on(self):
- return self.switch_state_model.switch_on
+ def switch_on(self) -> linopy.Variable | None:
+ """Switch on variable"""
+ return self.get('switch|on')
@property
- def switch_off(self):
- return self.switch_state_model.switch_off
+ def switch_off(self) -> linopy.Variable | None:
+ """Switch off variable"""
+ return self.get('switch|off')
@property
- def switch_on_nr(self):
- return self.switch_state_model.switch_on_nr
+ def switch_on_nr(self) -> linopy.Variable | None:
+ """Number of switch-ons variable"""
+ return self.get('switch|count')
@property
- def consecutive_on_hours(self):
- return self.consecutive_on_model.duration
+ def consecutive_on_hours(self) -> linopy.Variable | None:
+ """Consecutive on hours variable"""
+ return self.get('consecutive_on_hours')
@property
- def consecutive_off_hours(self):
- return self.consecutive_off_model.duration
+ def consecutive_off_hours(self) -> linopy.Variable | None:
+ """Consecutive off hours variable"""
+ return self.get('consecutive_off_hours')
+
+ def _get_previous_on_duration(self):
+ """Get previous on duration. Previously OFF by default, for one timestep"""
+ hours_per_step = self._model.hours_per_step.isel(time=0).min().item()
+ if self._previous_states is None:
+ return 0
+ else:
+ return ModelingUtilities.compute_consecutive_hours_in_state(self._previous_states, hours_per_step)
+
+ def _get_previous_off_duration(self):
+ """Get previous off duration. Previously OFF by default, for one timestep"""
+ hours_per_step = self._model.hours_per_step.isel(time=0).min().item()
+ if self._previous_states is None:
+ return hours_per_step
+ else:
+ return ModelingUtilities.compute_consecutive_hours_in_state(self._previous_states * -1 + 1, hours_per_step)
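A minimal sketch of the transition logic that `BoundingPatterns.state_transition_bounds` is assumed to enforce (names are illustrative, not the library API): every change of the binary state is explained by exactly one switch event, with the previous state defaulting to OFF.

```python
import numpy as np

on = np.array([0, 1, 1, 0, 1])            # binary state over time
delta = np.diff(on, prepend=0)            # prepend previous_state = 0 (OFF by default)
switch_on = (delta == 1).astype(int)
switch_off = (delta == -1).astype(int)

assert np.all(switch_on - switch_off == delta)  # transition balance
assert np.all(switch_on + switch_off <= 1)      # at most one event per step
```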
-class PieceModel(Model):
+class PieceModel(Submodel):
"""Class for modeling a linear piece of one or more variables in parallel"""
def __init__(
self,
- model: SystemModel,
+ model: FlowSystemModel,
label_of_element: str,
- label: str,
- as_time_series: bool = True,
+ label_of_model: str,
+ dims: FlowSystemDimensions | None,
):
- super().__init__(model, label_of_element, label)
self.inside_piece: linopy.Variable | None = None
self.lambda0: linopy.Variable | None = None
self.lambda1: linopy.Variable | None = None
- self._as_time_series = as_time_series
+ self.dims = dims
- def do_modeling(self):
- self.inside_piece = self.add(
- self._model.add_variables(
- binary=True,
- name=f'{self.label_full}|inside_piece',
- coords=self._model.coords if self._as_time_series else None,
- ),
- 'inside_piece',
- )
+ super().__init__(model, label_of_element, label_of_model)
- self.lambda0 = self.add(
- self._model.add_variables(
- lower=0,
- upper=1,
- name=f'{self.label_full}|lambda0',
- coords=self._model.coords if self._as_time_series else None,
- ),
- 'lambda0',
+ def _do_modeling(self):
+ super()._do_modeling()
+ self.inside_piece = self.add_variables(
+ binary=True,
+ short_name='inside_piece',
+ coords=self._model.get_coords(dims=self.dims),
+ )
+ self.lambda0 = self.add_variables(
+ lower=0,
+ upper=1,
+ short_name='lambda0',
+ coords=self._model.get_coords(dims=self.dims),
)
- self.lambda1 = self.add(
- self._model.add_variables(
- lower=0,
- upper=1,
- name=f'{self.label_full}|lambda1',
- coords=self._model.coords if self._as_time_series else None,
- ),
- 'lambda1',
+ self.lambda1 = self.add_variables(
+ lower=0,
+ upper=1,
+ short_name='lambda1',
+ coords=self._model.get_coords(dims=self.dims),
)
# eq: lambda0(t) + lambda1(t) = inside_piece(t)
- self.add(
- self._model.add_constraints(
- self.inside_piece == self.lambda0 + self.lambda1, name=f'{self.label_full}|inside_piece'
- ),
- 'inside_piece',
- )
+ self.add_constraints(self.inside_piece == self.lambda0 + self.lambda1, short_name='inside_piece')
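The lambda variables implement the usual convex-combination trick: a point inside a piece `[start, end]` is `lambda0 * start + lambda1 * end` with `lambda0 + lambda1 == inside_piece`. A tiny sketch with hypothetical numbers:

```python
start, end = 10.0, 50.0
inside_piece, lambda0, lambda1 = 1, 0.25, 0.75   # lambda0 + lambda1 == inside_piece
value = lambda0 * start + lambda1 * end
assert value == 40.0                             # lies inside [start, end]
```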
-class PiecewiseModel(Model):
+class PiecewiseModel(Submodel):
def __init__(
self,
- model: SystemModel,
+ model: FlowSystemModel,
label_of_element: str,
+ label_of_model: str,
piecewise_variables: dict[str, Piecewise],
zero_point: bool | linopy.Variable | None,
- as_time_series: bool,
- label: str = '',
+ dims: FlowSystemDimensions | None,
):
"""
Modeling a Piecewise relation between multiple variables.
@@ -849,50 +378,54 @@ def __init__(
Each Piece is a tuple of (start, end).
Args:
- model: The SystemModel that is used to create the model.
+ model: The FlowSystemModel that is used to create the model.
label_of_element: The label of the parent (Element). Used to construct the full label of the model.
- label: The label of the model. Used to construct the full label of the model.
+ label_of_model: The label of the model. Used to construct the full label of the model.
piecewise_variables: The variables to which the Pieces are assigned.
zero_point: A variable that can be used to define a zero point for the Piecewise relation. If None or False, no zero point is defined.
- as_time_series: Whether the Piecewise relation is defined for a TimeSeries or a single variable.
+ dims: The dimensions used for variable creation. If None, all dimensions are used.
"""
- super().__init__(model, label_of_element, label)
self._piecewise_variables = piecewise_variables
self._zero_point = zero_point
- self._as_time_series = as_time_series
+ self.dims = dims
self.pieces: list[PieceModel] = []
self.zero_point: linopy.Variable | None = None
+ super().__init__(model, label_of_element=label_of_element, label_of_model=label_of_model)
+
+ def _do_modeling(self):
+ super()._do_modeling()
+ # Validate all piecewise variables have the same number of segments
+ segment_counts = [len(pw) for pw in self._piecewise_variables.values()]
+ if not all(count == segment_counts[0] for count in segment_counts):
+ raise ValueError(f'All piecewises must have the same number of pieces, got {segment_counts}')
- def do_modeling(self):
for i in range(len(list(self._piecewise_variables.values())[0])):
- new_piece = self.add(
+ new_piece = self.add_submodels(
PieceModel(
model=self._model,
label_of_element=self.label_of_element,
- label=f'Piece_{i}',
- as_time_series=self._as_time_series,
- )
+ label_of_model=f'{self.label_of_element}|Piece_{i}',
+ dims=self.dims,
+ ),
+ short_name=f'Piece_{i}',
)
self.pieces.append(new_piece)
- new_piece.do_modeling()
for var_name in self._piecewise_variables:
variable = self._model.variables[var_name]
- self.add(
- self._model.add_constraints(
- variable
- == sum(
- [
- piece_model.lambda0 * piece_bounds.start + piece_model.lambda1 * piece_bounds.end
- for piece_model, piece_bounds in zip(
- self.pieces, self._piecewise_variables[var_name], strict=False
- )
- ]
- ),
- name=f'{self.label_full}|{var_name}|lambda',
+ self.add_constraints(
+ variable
+ == sum(
+ [
+ piece_model.lambda0 * piece_bounds.start + piece_model.lambda1 * piece_bounds.end
+ for piece_model, piece_bounds in zip(
+ self.pieces, self._piecewise_variables[var_name], strict=False
+ )
+ ]
),
- f'{var_name}|lambda',
+ name=f'{self.label_full}|{var_name}|lambda',
+ short_name=f'{var_name}|lambda',
)
# a) eq: Segment1.onSeg(t) + Segment2.onSeg(t) + ... = 1  (the variables may only reside within the segments)
@@ -901,43 +434,98 @@ def do_modeling(self):
self.zero_point = self._zero_point
rhs = self.zero_point
elif self._zero_point is True:
- self.zero_point = self.add(
- self._model.add_variables(
- coords=self._model.coords, binary=True, name=f'{self.label_full}|zero_point'
- ),
- 'zero_point',
+ self.zero_point = self.add_variables(
+ coords=self._model.get_coords(self.dims),
+ binary=True,
+ short_name='zero_point',
)
rhs = self.zero_point
else:
rhs = 1
- self.add(
- self._model.add_constraints(
- sum([piece.inside_piece for piece in self.pieces]) <= rhs,
- name=f'{self.label_full}|{variable.name}|single_segment',
- ),
- f'{var_name}|single_segment',
+ self.add_constraints(
+ sum([piece.inside_piece for piece in self.pieces]) <= rhs,
+ name=f'{self.label_full}|{variable.name}|single_segment',
+ short_name=f'{var_name}|single_segment',
)
-class ShareAllocationModel(Model):
+class PiecewiseEffectsModel(Submodel):
+ def __init__(
+ self,
+ model: FlowSystemModel,
+ label_of_element: str,
+ label_of_model: str,
+ piecewise_origin: tuple[str, Piecewise],
+ piecewise_shares: dict[str, Piecewise],
+ zero_point: bool | linopy.Variable | None,
+ ):
+ origin_count = len(piecewise_origin[1])
+ share_counts = [len(pw) for pw in piecewise_shares.values()]
+ if not all(count == origin_count for count in share_counts):
+ raise ValueError(
+ f'Piece count mismatch: piecewise_origin has {origin_count} segments, '
+ f'but piecewise_shares have {share_counts}'
+ )
+ self._zero_point = zero_point
+ self._piecewise_origin = piecewise_origin
+ self._piecewise_shares = piecewise_shares
+ self.shares: dict[str, linopy.Variable] = {}
+
+ self.piecewise_model: PiecewiseModel | None = None
+
+ super().__init__(model, label_of_element=label_of_element, label_of_model=label_of_model)
+
+ def _do_modeling(self):
+ self.shares = {
+ effect: self.add_variables(coords=self._model.get_coords(['period', 'scenario']), short_name=effect)
+ for effect in self._piecewise_shares
+ }
+
+ piecewise_variables = {
+ self._piecewise_origin[0]: self._piecewise_origin[1],
+ **{
+ self.shares[effect_label].name: self._piecewise_shares[effect_label]
+ for effect_label in self._piecewise_shares
+ },
+ }
+
+ self.piecewise_model = self.add_submodels(
+ PiecewiseModel(
+ model=self._model,
+ label_of_element=self.label_of_element,
+ piecewise_variables=piecewise_variables,
+ zero_point=self._zero_point,
+ dims=('period', 'scenario'),
+ label_of_model=f'{self.label_of_element}|PiecewiseEffects',
+ ),
+ short_name='PiecewiseEffects',
+ )
+
+ # Shares
+ self._model.effects.add_share_to_effects(
+ name=self.label_of_element,
+ expressions={effect: variable * 1 for effect, variable in self.shares.items()},
+ target='periodic',
+ )
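A hedged sketch of the piece-count invariant the constructor validates; the labels and breakpoints below are hypothetical:

```python
piecewise_origin = ('Boiler(Q_th)|size', [(0, 10), (10, 50)])   # 2 pieces (hypothetical label)
piecewise_shares = {'costs': [(0, 1_000), (1_000, 4_000)]}      # must also have 2 pieces

origin_count = len(piecewise_origin[1])
assert all(len(pw) == origin_count for pw in piecewise_shares.values())
```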
+
+
+class ShareAllocationModel(Submodel):
def __init__(
self,
- model: SystemModel,
- shares_are_time_series: bool,
+ model: FlowSystemModel,
+ dims: list[FlowSystemDimensions],
label_of_element: str | None = None,
- label: str | None = None,
- label_full: str | None = None,
+ label_of_model: str | None = None,
total_max: Scalar | None = None,
total_min: Scalar | None = None,
- max_per_hour: NumericData | None = None,
- min_per_hour: NumericData | None = None,
+ max_per_hour: TemporalData | None = None,
+ min_per_hour: TemporalData | None = None,
):
- super().__init__(model, label_of_element=label_of_element, label=label, label_full=label_full)
- if not shares_are_time_series: # If the condition is True
- assert max_per_hour is None and min_per_hour is None, (
- 'Both max_per_hour and min_per_hour cannot be used when shares_are_time_series is False'
- )
+ if 'time' not in dims and (max_per_hour is not None or min_per_hour is not None):
+ raise ValueError("max_per_hour and min_per_hour can only be used when 'time' is in dims")
+
+ self._dims = dims
self.total_per_timestep: linopy.Variable | None = None
self.total: linopy.Variable | None = None
self.shares: dict[str, linopy.Variable] = {}
@@ -947,51 +535,43 @@ def __init__(
self._eq_total: linopy.Constraint | None = None
# Parameters
- self._shares_are_time_series = shares_are_time_series
self._total_max = total_max if total_max is not None else np.inf
self._total_min = total_min if total_min is not None else -np.inf
self._max_per_hour = max_per_hour if max_per_hour is not None else np.inf
self._min_per_hour = min_per_hour if min_per_hour is not None else -np.inf
- def do_modeling(self):
- self.total = self.add(
- self._model.add_variables(
- lower=self._total_min, upper=self._total_max, coords=None, name=f'{self.label_full}|total'
- ),
- 'total',
+ super().__init__(model, label_of_element=label_of_element, label_of_model=label_of_model)
+
+ def _do_modeling(self):
+ super()._do_modeling()
+ self.total = self.add_variables(
+ lower=self._total_min,
+ upper=self._total_max,
+ coords=self._model.get_coords([dim for dim in self._dims if dim != 'time']),
+ name=self.label_full,
+ short_name='total',
)
# eq: sum = sum(share_i) # skalar
- self._eq_total = self.add(
- self._model.add_constraints(self.total == 0, name=f'{self.label_full}|total'), 'total'
- )
+ self._eq_total = self.add_constraints(self.total == 0, name=self.label_full)
- if self._shares_are_time_series:
- self.total_per_timestep = self.add(
- self._model.add_variables(
- lower=-np.inf
- if (self._min_per_hour is None)
- else np.multiply(self._min_per_hour, self._model.hours_per_step),
- upper=np.inf
- if (self._max_per_hour is None)
- else np.multiply(self._max_per_hour, self._model.hours_per_step),
- coords=self._model.coords,
- name=f'{self.label_full}|total_per_timestep',
- ),
- 'total_per_timestep',
+ if 'time' in self._dims:
+ self.total_per_timestep = self.add_variables(
+ lower=-np.inf if (self._min_per_hour is None) else self._min_per_hour * self._model.hours_per_step,
+ upper=np.inf if (self._max_per_hour is None) else self._max_per_hour * self._model.hours_per_step,
+ coords=self._model.get_coords(self._dims),
+ short_name='per_timestep',
)
- self._eq_total_per_timestep = self.add(
- self._model.add_constraints(self.total_per_timestep == 0, name=f'{self.label_full}|total_per_timestep'),
- 'total_per_timestep',
- )
+ self._eq_total_per_timestep = self.add_constraints(self.total_per_timestep == 0, short_name='per_timestep')
# Add it to the total
- self._eq_total.lhs -= self.total_per_timestep.sum()
+ self._eq_total.lhs -= self.total_per_timestep.sum(dim='time')
def add_share(
self,
name: str,
expression: linopy.LinearExpression,
+ dims: list[FlowSystemDimensions] | None = None,
):
"""
Add a share to the share allocation model. If the share already exists, the expression is added to the existing share.
@@ -1002,124 +582,32 @@ def add_share(
Args:
name: The name of the share.
expression: The expression of the share. Added to the right hand side of the constraint.
+ dims: The dimensions of the share. Defaults to all dimensions of the model. Dims are ordered automatically.
"""
+ if dims is None:
+ dims = self._dims
+ else:
+ for dim in ('time', 'period', 'scenario'):
+ if dim in dims and dim not in self._dims:
+ raise ValueError(f'Cannot add share with {dim}-dim to a model without {dim}-dim')
+
if name in self.shares:
self.share_constraints[name].lhs -= expression
else:
- self.shares[name] = self.add(
- self._model.add_variables(
- coords=None
- if isinstance(expression, linopy.LinearExpression)
- and expression.ndim == 0
- or not isinstance(expression, linopy.LinearExpression)
- else self._model.coords,
- name=f'{name}->{self.label_full}',
- ),
- name,
+ self.shares[name] = self.add_variables(
+ coords=self._model.get_coords(dims),
+ name=f'{name}->{self.label_full}',
+ short_name=name,
)
- self.share_constraints[name] = self.add(
- self._model.add_constraints(self.shares[name] == expression, name=f'{name}->{self.label_full}'), name
+
+ self.share_constraints[name] = self.add_constraints(
+ self.shares[name] == expression, name=f'{name}->{self.label_full}'
)
- if self.shares[name].ndim == 0:
+
+ if 'time' not in dims:
self._eq_total.lhs -= self.shares[name]
else:
self._eq_total_per_timestep.lhs -= self.shares[name]
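A minimal xarray sketch of the aggregation semantics for shares with a time dimension: shares sum into `total_per_timestep`, which is then summed over `time` into `total`.

```python
import xarray as xr

share_a = xr.DataArray([1.0, 2.0, 3.0], dims='time')
share_b = xr.DataArray([0.5, 0.5, 0.5], dims='time')

total_per_timestep = share_a + share_b
total = total_per_timestep.sum('time').item()
assert total == 7.5
```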
-
-
-class PiecewiseEffectsModel(Model):
- def __init__(
- self,
- model: SystemModel,
- label_of_element: str,
- piecewise_origin: tuple[str, Piecewise],
- piecewise_shares: dict[str, Piecewise],
- zero_point: bool | linopy.Variable | None,
- label: str = 'PiecewiseEffects',
- ):
- super().__init__(model, label_of_element, label)
- assert len(piecewise_origin[1]) == len(list(piecewise_shares.values())[0]), (
- 'Piece length of variable_segments and share_segments must be equal'
- )
- self._zero_point = zero_point
- self._piecewise_origin = piecewise_origin
- self._piecewise_shares = piecewise_shares
- self.shares: dict[str, linopy.Variable] = {}
-
- self.piecewise_model: PiecewiseModel | None = None
-
- def do_modeling(self):
- self.shares = {
- effect: self.add(self._model.add_variables(coords=None, name=f'{self.label_full}|{effect}'), f'{effect}')
- for effect in self._piecewise_shares
- }
-
- piecewise_variables = {
- self._piecewise_origin[0]: self._piecewise_origin[1],
- **{
- self.shares[effect_label].name: self._piecewise_shares[effect_label]
- for effect_label in self._piecewise_shares
- },
- }
-
- self.piecewise_model = self.add(
- PiecewiseModel(
- model=self._model,
- label_of_element=self.label_of_element,
- piecewise_variables=piecewise_variables,
- zero_point=self._zero_point,
- as_time_series=False,
- label='PiecewiseEffects',
- )
- )
-
- self.piecewise_model.do_modeling()
-
- # Shares
- self._model.effects.add_share_to_effects(
- name=self.label_of_element,
- expressions={effect: variable * 1 for effect, variable in self.shares.items()},
- target='invest',
- )
-
-
-class PreventSimultaneousUsageModel(Model):
- """
- Prevents multiple Multiple Binary variables from being 1 at the same time
-
- Only 'classic type is modeled for now (# "classic" -> alle Flows brauchen Binärvariable:)
- In 'new', the binary Variables need to be forced beforehand, which is not that straight forward... --> TODO maybe
-
-
- # "new":
- # eq: flow_1.on(t) + flow_2.on(t) + .. + flow_i.val(t)/flow_i.max <= 1 (1 Flow ohne Binärvariable!)
-
- # Anmerkung: Patrick Schönfeld (oemof, custom/link.py) macht bei 2 Flows ohne Binärvariable dies:
- # 1) bin + flow1/flow1_max <= 1
- # 2) bin - flow2/flow2_max >= 0
- # 3) geht nur, wenn alle flow.min >= 0
- # --> könnte man auch umsetzen (statt force_on_variable() für die Flows, aber sollte aufs selbe wie "new" kommen)
- """
-
- def __init__(
- self,
- model: SystemModel,
- variables: list[linopy.Variable],
- label_of_element: str,
- label: str = 'PreventSimultaneousUsage',
- ):
- super().__init__(model, label_of_element, label)
- self._simultanious_use_variables = variables
- assert len(self._simultanious_use_variables) >= 2, (
- f'Model {self.__class__.__name__} must get at least two variables'
- )
- for variable in self._simultanious_use_variables: # classic
- assert variable.attrs['binary'], f'Variable {variable} must be binary for use in {self.__class__.__name__}'
-
- def do_modeling(self):
- # eq: sum(flow_i.on(t)) <= 1.1 (1 wird etwas größer gewählt wg. Binärvariablengenauigkeit)
- self.add(
- self._model.add_constraints(
- sum(self._simultanious_use_variables) <= 1.1, name=f'{self.label_full}|prevent_simultaneous_use'
- ),
- 'prevent_simultaneous_use',
- )
diff --git a/flixopt/flow_system.py b/flixopt/flow_system.py
index 604b1ca1e..ad43c183b 100644
--- a/flixopt/flow_system.py
+++ b/flixopt/flow_system.py
@@ -7,30 +7,43 @@
import json
import logging
import warnings
-from io import StringIO
-from typing import TYPE_CHECKING, Literal
+from typing import TYPE_CHECKING, Any, Literal, Optional
+import numpy as np
import pandas as pd
-from rich.console import Console
-from rich.pretty import Pretty
-
-from . import io as fx_io
-from .core import NumericData, TimeSeries, TimeSeriesCollection, TimeSeriesData
-from .effects import Effect, EffectCollection, EffectTimeSeries, EffectValuesDict, EffectValuesUser
+import xarray as xr
+
+from .core import (
+ ConversionError,
+ DataConverter,
+ FlowSystemDimensions,
+ PeriodicData,
+ PeriodicDataUser,
+ TemporalData,
+ TemporalDataUser,
+ TimeSeriesData,
+)
+from .effects import (
+ Effect,
+ EffectCollection,
+ PeriodicEffects,
+ PeriodicEffectsUser,
+ TemporalEffects,
+ TemporalEffectsUser,
+)
from .elements import Bus, Component, Flow
-from .structure import CLASS_REGISTRY, Element, SystemModel
+from .structure import Element, FlowSystemModel, Interface
if TYPE_CHECKING:
import pathlib
+ from collections.abc import Collection
- import numpy as np
import pyvis
- import xarray as xr
logger = logging.getLogger('flixopt')
-class FlowSystem:
+class FlowSystem(Interface):
"""
A FlowSystem organizes the high level Elements (Components, Buses & Effects).
@@ -38,106 +51,391 @@ class FlowSystem:
Args:
timesteps: The timesteps of the model.
+ periods: The periods of the model.
+ scenarios: The scenarios of the model.
hours_of_last_timestep: The duration of the last time step. Uses the last time interval if not specified
hours_of_previous_timesteps: The duration of previous timesteps.
If None, the first time increment of time_series is used.
This is needed to calculate previous durations (for example consecutive_on_hours).
If you use an array, take care that it's long enough to cover all previous values!
+ weights: The weights of each period and scenario. If None, all scenarios have the same weight (normalized to 1).
+ It's recommended to normalize the weights so they sum to 1.
+ scenario_independent_sizes: Controls whether investment sizes are equalized across scenarios.
+ - True: All sizes are shared/equalized across scenarios
+ - False: All sizes are optimized separately per scenario
+ - list[str]: Only specified components (by label_full) are equalized across scenarios
+ scenario_independent_flow_rates: Controls whether flow rates are equalized across scenarios.
+ - True: All flow rates are shared/equalized across scenarios
+ - False: All flow rates are optimized separately per scenario
+ - list[str]: Only specified flows (by label_full) are equalized across scenarios
Notes:
- Creates an empty registry for components and buses, an empty EffectCollection, and a placeholder for a SystemModel.
- - The instance starts disconnected (self._connected == False) and will be connected automatically when trying to solve a calculation.
+ - The instance starts disconnected (self._connected_and_transformed == False) and will be
+ connected and transformed automatically when trying to solve a calculation.
"""
def __init__(
self,
timesteps: pd.DatetimeIndex,
+ periods: pd.Index | None = None,
+ scenarios: pd.Index | None = None,
hours_of_last_timestep: float | None = None,
hours_of_previous_timesteps: int | float | np.ndarray | None = None,
+ weights: PeriodicDataUser | None = None,
+ scenario_independent_sizes: bool | list[str] = True,
+ scenario_independent_flow_rates: bool | list[str] = False,
):
- """
- Initialize a FlowSystem that manages components, buses, effects, and their time-series.
+ self.timesteps = self._validate_timesteps(timesteps)
+ self.timesteps_extra = self._create_timesteps_with_extra(self.timesteps, hours_of_last_timestep)
+ self.hours_of_previous_timesteps = self._calculate_hours_of_previous_timesteps(
+ self.timesteps, hours_of_previous_timesteps
+ )
- Parameters:
- timesteps: DatetimeIndex defining the primary timesteps for the system's TimeSeriesCollection.
- hours_of_last_timestep: Duration (in hours) of the final timestep; if None, inferred from timesteps or defaults in TimeSeriesCollection.
- hours_of_previous_timesteps: Scalar or array-like durations (in hours) for the preceding timesteps; used to configure non-uniform timestep lengths.
+ self.periods = None if periods is None else self._validate_periods(periods)
+ self.scenarios = None if scenarios is None else self._validate_scenarios(scenarios)
- Notes:
- Creates an empty registry for components and buses, an empty EffectCollection, and a placeholder for a SystemModel.
- The instance starts disconnected (self._connected == False) and with no active network visualization app.
- This can also be triggered manually with `_connect_network()`.
- """
- self.time_series_collection = TimeSeriesCollection(
- timesteps=timesteps,
- hours_of_last_timestep=hours_of_last_timestep,
- hours_of_previous_timesteps=hours_of_previous_timesteps,
- )
+ self.weights = weights
+
+ hours_per_timestep = self.calculate_hours_per_timestep(self.timesteps_extra)
+
+ self.hours_of_last_timestep = hours_per_timestep[-1].item()
- # defaults:
+ self.hours_per_timestep = self.fit_to_model_coords('hours_per_timestep', hours_per_timestep)
+
+ # Element collections
self.components: dict[str, Component] = {}
self.buses: dict[str, Bus] = {}
self.effects: EffectCollection = EffectCollection()
- self.model: SystemModel | None = None
+ self.model: FlowSystemModel | None = None
- self._connected = False
+ self._connected_and_transformed = False
+ self._used_in_calculation = False
self._network_app = None
- @classmethod
- def from_dataset(cls, ds: xr.Dataset):
- timesteps_extra = pd.DatetimeIndex(ds.attrs['timesteps_extra'], name='time')
- hours_of_last_timestep = TimeSeriesCollection.calculate_hours_per_timestep(timesteps_extra).isel(time=-1).item()
-
- flow_system = FlowSystem(
- timesteps=timesteps_extra[:-1],
- hours_of_last_timestep=hours_of_last_timestep,
- hours_of_previous_timesteps=ds.attrs['hours_of_previous_timesteps'],
+ # Use properties to validate and store scenario dimension settings
+ self.scenario_independent_sizes = scenario_independent_sizes
+ self.scenario_independent_flow_rates = scenario_independent_flow_rates
+
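A hedged usage sketch of the extended constructor, assuming `import flixopt as fx`; the index values and labels are illustrative:

```python
import pandas as pd
import flixopt as fx

flow_system = fx.FlowSystem(
    timesteps=pd.date_range('2025-01-01', periods=24, freq='h', name='time'),
    periods=pd.Index([2025, 2030], name='period'),
    scenarios=pd.Index(['low_demand', 'high_demand'], name='scenario'),
    scenario_independent_sizes=True,        # sizes shared across scenarios (default)
    scenario_independent_flow_rates=False,  # flow rates vary per scenario (default)
)
```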
+ @staticmethod
+ def _validate_timesteps(timesteps: pd.DatetimeIndex) -> pd.DatetimeIndex:
+ """Validate timesteps format and rename if needed."""
+ if not isinstance(timesteps, pd.DatetimeIndex):
+ raise TypeError('timesteps must be a pandas DatetimeIndex')
+ if len(timesteps) < 2:
+ raise ValueError('timesteps must contain at least 2 timestamps')
+ if timesteps.name != 'time':
+ timesteps.name = 'time'
+ if not timesteps.is_monotonic_increasing:
+ raise ValueError('timesteps must be sorted')
+ return timesteps
+
+ @staticmethod
+ def _validate_scenarios(scenarios: pd.Index) -> pd.Index:
+ """
+ Validate and prepare scenario index.
+
+ Args:
+ scenarios: The scenario index to validate
+ """
+ if not isinstance(scenarios, pd.Index) or len(scenarios) == 0:
+ raise ConversionError('Scenarios must be a non-empty Index')
+
+ if scenarios.name != 'scenario':
+ scenarios = scenarios.rename('scenario')
+
+ return scenarios
+
+ @staticmethod
+ def _validate_periods(periods: pd.Index) -> pd.Index:
+ """
+ Validate and prepare period index.
+
+ Args:
+ periods: The period index to validate
+ """
+ if not isinstance(periods, pd.Index) or len(periods) == 0:
+ raise ConversionError(f'Periods must be a non-empty Index. Got {periods}')
+
+ if not (
+ periods.dtype.kind == 'i' # integer dtype
+ and periods.is_monotonic_increasing # rising
+ and periods.is_unique
+ ):
+ raise ConversionError(f'Periods must be a monotonically increasing and unique Index. Got {periods}')
+
+ if periods.name != 'period':
+ periods = periods.rename('period')
+
+ return periods
+
+ @staticmethod
+ def _create_timesteps_with_extra(
+ timesteps: pd.DatetimeIndex, hours_of_last_timestep: float | None
+ ) -> pd.DatetimeIndex:
+ """Create timesteps with an extra step at the end."""
+ if hours_of_last_timestep is None:
+ hours_of_last_timestep = (timesteps[-1] - timesteps[-2]) / pd.Timedelta(hours=1)
+
+ last_date = pd.DatetimeIndex([timesteps[-1] + pd.Timedelta(hours=hours_of_last_timestep)], name='time')
+ return pd.DatetimeIndex(timesteps.append(last_date), name='time')
+
+ @staticmethod
+ def calculate_hours_per_timestep(timesteps_extra: pd.DatetimeIndex) -> xr.DataArray:
+ """Calculate duration of each timestep as a 1D DataArray."""
+ hours_per_step = np.diff(timesteps_extra) / pd.Timedelta(hours=1)
+ return xr.DataArray(
+ hours_per_step, coords={'time': timesteps_extra[:-1]}, dims='time', name='hours_per_timestep'
)
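A quick sketch of what `calculate_hours_per_timestep` produces for a uniform hourly index; the extra last timestep only bounds the final interval:

```python
import numpy as np
import pandas as pd
import xarray as xr

timesteps_extra = pd.date_range('2025-01-01', periods=4, freq='h', name='time')
hours = np.diff(timesteps_extra) / pd.Timedelta(hours=1)
da = xr.DataArray(hours, coords={'time': timesteps_extra[:-1]}, dims='time')
assert da.shape == (3,) and bool((da == 1.0).all())
```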
- structure = fx_io.insert_dataarray({key: ds.attrs[key] for key in ['components', 'buses', 'effects']}, ds)
- flow_system.add_elements(
- *[Bus.from_dict(bus) for bus in structure['buses'].values()]
- + [Effect.from_dict(effect) for effect in structure['effects'].values()]
- + [CLASS_REGISTRY[comp['__class__']].from_dict(comp) for comp in structure['components'].values()]
+ @staticmethod
+ def _calculate_hours_of_previous_timesteps(
+ timesteps: pd.DatetimeIndex, hours_of_previous_timesteps: float | np.ndarray | None
+ ) -> float | np.ndarray:
+ """Calculate duration of regular timesteps."""
+ if hours_of_previous_timesteps is not None:
+ return hours_of_previous_timesteps
+ # Calculate from the first interval
+ first_interval = timesteps[1] - timesteps[0]
+ return first_interval.total_seconds() / 3600 # Convert to hours
+
+ def _create_reference_structure(self) -> tuple[dict, dict[str, xr.DataArray]]:
+ """
+ Override Interface method to handle FlowSystem-specific serialization.
+ Combines custom FlowSystem logic with Interface pattern for nested objects.
+
+ Returns:
+ Tuple of (reference_structure, extracted_arrays_dict)
+ """
+ # Start with Interface base functionality for constructor parameters
+ reference_structure, all_extracted_arrays = super()._create_reference_structure()
+
+ # Remove timesteps, as it's directly stored in dataset index
+ reference_structure.pop('timesteps', None)
+
+ # Extract from components
+ components_structure = {}
+ for comp_label, component in self.components.items():
+ comp_structure, comp_arrays = component._create_reference_structure()
+ all_extracted_arrays.update(comp_arrays)
+ components_structure[comp_label] = comp_structure
+ reference_structure['components'] = components_structure
+
+ # Extract from buses
+ buses_structure = {}
+ for bus_label, bus in self.buses.items():
+ bus_structure, bus_arrays = bus._create_reference_structure()
+ all_extracted_arrays.update(bus_arrays)
+ buses_structure[bus_label] = bus_structure
+ reference_structure['buses'] = buses_structure
+
+ # Extract from effects
+ effects_structure = {}
+ for effect in self.effects:
+ effect_structure, effect_arrays = effect._create_reference_structure()
+ all_extracted_arrays.update(effect_arrays)
+ effects_structure[effect.label] = effect_structure
+ reference_structure['effects'] = effects_structure
+
+ return reference_structure, all_extracted_arrays
+
+ def to_dataset(self) -> xr.Dataset:
+ """
+ Convert the FlowSystem to an xarray Dataset.
+ Ensures FlowSystem is connected before serialization.
+
+ Returns:
+ xr.Dataset: Dataset containing all DataArrays with structure in attributes
+ """
+ if not self.connected_and_transformed:
+ logger.warning('FlowSystem is not connected_and_transformed. Connecting and transforming data now.')
+ self.connect_and_transform()
+
+ return super().to_dataset()
+
+ @classmethod
+ def from_dataset(cls, ds: xr.Dataset) -> FlowSystem:
+ """
+ Create a FlowSystem from an xarray Dataset.
+ Handles FlowSystem-specific reconstruction logic.
+
+ Args:
+ ds: Dataset containing the FlowSystem data
+
+ Returns:
+ FlowSystem instance
+ """
+ # Get the reference structure from attrs
+ reference_structure = dict(ds.attrs)
+
+ # Create arrays dictionary from dataset variables
+ arrays_dict = {name: array for name, array in ds.data_vars.items()}
+
+ # Create FlowSystem instance with constructor parameters
+ flow_system = cls(
+ timesteps=ds.indexes['time'],
+ periods=ds.indexes.get('period'),
+ scenarios=ds.indexes.get('scenario'),
+ weights=cls._resolve_dataarray_reference(reference_structure['weights'], arrays_dict)
+ if 'weights' in reference_structure
+ else None,
+ hours_of_last_timestep=reference_structure.get('hours_of_last_timestep'),
+ hours_of_previous_timesteps=reference_structure.get('hours_of_previous_timesteps'),
+ scenario_independent_sizes=reference_structure.get('scenario_independent_sizes', True),
+ scenario_independent_flow_rates=reference_structure.get('scenario_independent_flow_rates', False),
)
+
+ # Restore components
+ components_structure = reference_structure.get('components', {})
+ for comp_label, comp_data in components_structure.items():
+ component = cls._resolve_reference_structure(comp_data, arrays_dict)
+ if not isinstance(component, Component):
+ logger.critical(f'Restoring component {comp_label} failed.')
+ flow_system._add_components(component)
+
+ # Restore buses
+ buses_structure = reference_structure.get('buses', {})
+ for bus_label, bus_data in buses_structure.items():
+ bus = cls._resolve_reference_structure(bus_data, arrays_dict)
+ if not isinstance(bus, Bus):
+ logger.critical(f'Restoring bus {bus_label} failed.')
+ flow_system._add_buses(bus)
+
+ # Restore effects
+ effects_structure = reference_structure.get('effects', {})
+ for effect_label, effect_data in effects_structure.items():
+ effect = cls._resolve_reference_structure(effect_data, arrays_dict)
+ if not isinstance(effect, Effect):
+ logger.critical(f'Restoring effect {effect_label} failed.')
+ flow_system._add_effects(effect)
+
return flow_system
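A hedged round-trip sketch using the serialization API above, assuming `import flixopt as fx`; `__eq__` compares the dataset representations:

```python
import pandas as pd
import flixopt as fx

fs = fx.FlowSystem(pd.date_range('2025-01-01', periods=3, freq='h', name='time'))
ds = fs.to_dataset()                    # connects and transforms if needed
restored = fx.FlowSystem.from_dataset(ds)
assert restored == fs
```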
- @classmethod
- def from_dict(cls, data: dict) -> FlowSystem:
+ def to_netcdf(self, path: str | pathlib.Path, compression: int = 0):
"""
- Load a FlowSystem from a dictionary.
+ Save the FlowSystem to a NetCDF file.
+ Ensures FlowSystem is connected before saving.
Args:
- data: Dictionary containing the FlowSystem data.
+ path: The path to the netCDF file.
+ compression: The compression level to use when saving the file.
"""
- timesteps_extra = pd.DatetimeIndex(data['timesteps_extra'], name='time')
- hours_of_last_timestep = TimeSeriesCollection.calculate_hours_per_timestep(timesteps_extra).isel(time=-1).item()
+ if not self.connected_and_transformed:
+ logger.warning('FlowSystem is not connected. Calling connect_and_transform() now.')
+ self.connect_and_transform()
- flow_system = FlowSystem(
- timesteps=timesteps_extra[:-1],
- hours_of_last_timestep=hours_of_last_timestep,
- hours_of_previous_timesteps=data['hours_of_previous_timesteps'],
- )
+ super().to_netcdf(path, compression)
+ logger.info(f'Saved FlowSystem to {path}')
- flow_system.add_elements(*[Bus.from_dict(bus) for bus in data['buses'].values()])
+ def get_structure(self, clean: bool = False, stats: bool = False) -> dict:
+ """
+ Get FlowSystem structure.
+ Ensures FlowSystem is connected before getting structure.
- flow_system.add_elements(*[Effect.from_dict(effect) for effect in data['effects'].values()])
+ Args:
+ clean: If True, remove None and empty dicts and lists.
+ stats: If True, replace DataArray references with statistics
+ """
+ if not self.connected_and_transformed:
+ logger.warning('FlowSystem is not connected. Calling connect_and_transform() now.')
+ self.connect_and_transform()
- flow_system.add_elements(
- *[CLASS_REGISTRY[comp['__class__']].from_dict(comp) for comp in data['components'].values()]
- )
+ return super().get_structure(clean, stats)
- flow_system.transform_data()
+ def to_json(self, path: str | pathlib.Path):
+ """
+ Save the flow system to a JSON file.
+ Ensures FlowSystem is connected before saving.
- return flow_system
+ Args:
+ path: The path to the JSON file.
+ """
+ if not self.connected_and_transformed:
+ logger.warning(
+ 'FlowSystem needs to be connected and transformed before saving to JSON. Calling connect_and_transform() now.'
+ )
+ self.connect_and_transform()
- @classmethod
- def from_netcdf(cls, path: str | pathlib.Path):
+ super().to_json(path)
+
+ def fit_to_model_coords(
+ self,
+ name: str,
+ data: TemporalDataUser | PeriodicDataUser | None,
+ dims: Collection[FlowSystemDimensions] | None = None,
+ ) -> TemporalData | PeriodicData | None:
+ """
+ Fit data to model coordinate system (currently time, but extensible).
+
+ Args:
+ name: Name of the data
+ data: Data to fit to model coordinates
+ dims: Collection of dimension names to use for fitting. If None, all dimensions are used.
+
+ Returns:
+ xr.DataArray aligned to model coordinate system. If data is None, returns None.
+ """
+ if data is None:
+ return None
+
+ coords = self.coords
+
+ if dims is not None:
+ coords = {k: coords[k] for k in dims if k in coords}
+
+ # Convert the data to a DataArray aligned with the selected coords
+ if isinstance(data, TimeSeriesData):
+ try:
+ data.name = name  # Set the name on the existing TimeSeriesData object
+ return data.fit_to_coords(coords)
+ except ConversionError as e:
+ raise ConversionError(
+ f'Could not convert time series data "{name}" to DataArray:\n{data}\nOriginal Error: {e}'
+ ) from e
+
+ try:
+ return DataConverter.to_dataarray(data, coords=coords).rename(name)
+ except ConversionError as e:
+ raise ConversionError(f'Could not convert data "{name}" to DataArray:\n{data}\nOriginal Error: {e}') from e
+
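Conceptually, `fit_to_model_coords` broadcasts user data onto the active model coordinates. A minimal xarray sketch for a scalar; the name 'demand' is hypothetical:

```python
import pandas as pd
import xarray as xr

coords = {'time': pd.date_range('2025-01-01', periods=3, freq='h', name='time')}
da = xr.DataArray(42.0).expand_dims(coords).rename('demand')
assert da.dims == ('time',) and float(da.isel(time=0)) == 42.0
```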
+ def fit_effects_to_model_coords(
+ self,
+ label_prefix: str | None,
+ effect_values: TemporalEffectsUser | PeriodicEffectsUser | None,
+ label_suffix: str | None = None,
+ dims: Collection[FlowSystemDimensions] | None = None,
+ delimiter: str = '|',
+ ) -> TemporalEffects | PeriodicEffects | None:
"""
- Load a FlowSystem from a netcdf file
+ Transform EffectValues from the user to Internal Datatypes aligned with model coordinates.
"""
- return cls.from_dataset(fx_io.load_dataset_from_netcdf(path))
+ if effect_values is None:
+ return None
+
+ effect_values_dict = self.effects.create_effect_values_dict(effect_values)
+
+ return {
+ effect: self.fit_to_model_coords(
+ str(delimiter).join(filter(None, [label_prefix, effect, label_suffix])),
+ value,
+ dims=dims,
+ )
+ for effect, value in effect_values_dict.items()
+ }
+
+ def connect_and_transform(self):
+ """Transform data for all elements using the new simplified approach."""
+ if self.connected_and_transformed:
+ logger.debug('FlowSystem already connected and transformed')
+ return
+
+ self.weights = self.fit_to_model_coords('weights', self.weights, dims=['period', 'scenario'])
+
+ self._connect_network()
+ for element in list(self.components.values()) + list(self.effects.effects.values()) + list(self.buses.values()):
+ element.transform_data(self)
+ self._connected_and_transformed = True
def add_elements(self, *elements: Element) -> None:
"""
@@ -147,12 +445,12 @@ def add_elements(self, *elements: Element) -> None:
*elements: childs of Element like Boiler, HeatPump, Bus,...
modeling Elements
"""
- if self._connected:
+ if self.connected_and_transformed:
warnings.warn(
'You are adding elements to an already connected FlowSystem. This is not recommended (But it works).',
stacklevel=2,
)
- self._connected = False
+ self._connected_and_transformed = False
for new_element in list(elements):
if isinstance(new_element, Component):
self._add_components(new_element)
@@ -165,64 +463,19 @@ def add_elements(self, *elements: Element) -> None:
f'Tried to add incompatible object to FlowSystem: {type(new_element)=}: {new_element=} '
)
- def to_json(self, path: str | pathlib.Path):
- """
- Saves the flow system to a json file.
- This not meant to be reloaded and recreate the object,
- but rather used to document or compare the flow_system to others.
-
- Args:
- path: The path to the json file.
- """
- with open(path, 'w', encoding='utf-8') as f:
- json.dump(self.as_dict('stats'), f, indent=4, ensure_ascii=False)
-
- def as_dict(self, data_mode: Literal['data', 'name', 'stats'] = 'data') -> dict:
- """Convert the object to a dictionary representation."""
- data = {
- 'components': {
- comp.label: comp.to_dict()
- for comp in sorted(self.components.values(), key=lambda component: component.label.upper())
- },
- 'buses': {
- bus.label: bus.to_dict() for bus in sorted(self.buses.values(), key=lambda bus: bus.label.upper())
- },
- 'effects': {
- effect.label: effect.to_dict()
- for effect in sorted(self.effects, key=lambda effect: effect.label.upper())
- },
- 'timesteps_extra': [date.isoformat() for date in self.time_series_collection.timesteps_extra],
- 'hours_of_previous_timesteps': self.time_series_collection.hours_of_previous_timesteps,
- }
- if data_mode == 'data':
- return fx_io.replace_timeseries(data, 'data')
- elif data_mode == 'stats':
- return fx_io.remove_none_and_empty(fx_io.replace_timeseries(data, data_mode))
- return fx_io.replace_timeseries(data, data_mode)
-
- def as_dataset(self, constants_in_dataset: bool = False) -> xr.Dataset:
+ def create_model(self, normalize_weights: bool = True) -> FlowSystemModel:
"""
- Convert the FlowSystem to a xarray Dataset.
+ Create a linopy model from the FlowSystem.
Args:
- constants_in_dataset: If True, constants are included as Dataset variables.
+ normalize_weights: Whether to automatically normalize the weights (periods and scenarios) to sum up to 1 when solving.
"""
- ds = self.time_series_collection.to_dataset(include_constants=constants_in_dataset)
- ds.attrs = self.as_dict(data_mode='name')
- return ds
-
- def to_netcdf(self, path: str | pathlib.Path, compression: int = 0, constants_in_dataset: bool = True) -> None:
- """
- Saves the FlowSystem to a netCDF file.
-
- Args:
- path: The path to the netCDF file.
- compression: The compression level to use when saving the file.
- constants_in_dataset: If True, constants are included as Dataset variables.
- """
- ds = self.as_dataset(constants_in_dataset=constants_in_dataset)
- fx_io.save_dataset_to_netcdf(ds, path, compression=compression)
- logger.info(f'Saved FlowSystem to {path}')
+ if not self.connected_and_transformed:
+ raise RuntimeError(
+ 'FlowSystem is not connected_and_transformed. Call FlowSystem.connect_and_transform() first.'
+ )
+ self.model = FlowSystemModel(self, normalize_weights)
+ return self.model
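On `normalize_weights`: a sketch of the assumed rescaling so that period/scenario weights sum to 1:

```python
import numpy as np

weights = np.array([2.0, 1.0, 1.0])   # e.g. scenario weights (hypothetical)
normalized = weights / weights.sum()
assert np.isclose(normalized.sum(), 1.0)
```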
def plot_network(
self,
@@ -283,7 +536,7 @@ def start_network_app(self):
f'Original error: {VISUALIZATION_ERROR}'
)
- if not self._connected:
+ if not self._connected_and_transformed:
self._connect_network()
if self._network_app is not None:
@@ -305,7 +558,7 @@ def stop_network_app(self):
)
if self._network_app is None:
- logger.warning('No network app is currently running. Cant stop it')
+ logger.warning("No network app is currently running. Can't stop it")
return
try:
@@ -318,8 +571,8 @@ def stop_network_app(self):
self._network_app = None
def network_infos(self) -> tuple[dict[str, dict[str, str]], dict[str, dict[str, str]]]:
- if not self._connected:
- self._connect_network()
+ if not self.connected_and_transformed:
+ self.connect_and_transform()
nodes = {
node.label_full: {
'label': node.label,
@@ -341,67 +594,6 @@ def network_infos(self) -> tuple[dict[str, dict[str, str]], dict[str, dict[str,
return nodes, edges
- def transform_data(self):
- if not self._connected:
- self._connect_network()
- for element in self.all_elements.values():
- element.transform_data(self)
-
- def create_time_series(
- self,
- name: str,
- data: NumericData | TimeSeriesData | TimeSeries | None,
- needs_extra_timestep: bool = False,
- ) -> TimeSeries | None:
- """
- Tries to create a TimeSeries from NumericData Data and adds it to the time_series_collection
- If the data already is a TimeSeries, nothing happens and the TimeSeries gets reset and returned
- If the data is a TimeSeriesData, it is converted to a TimeSeries, and the aggregation weights are applied.
- If the data is None, nothing happens.
- """
-
- if data is None:
- return None
- elif isinstance(data, TimeSeries):
- data.restore_data()
- if data in self.time_series_collection:
- return data
- return self.time_series_collection.create_time_series(
- data=data.active_data, name=name, needs_extra_timestep=needs_extra_timestep
- )
- return self.time_series_collection.create_time_series(
- data=data, name=name, needs_extra_timestep=needs_extra_timestep
- )
-
- def create_effect_time_series(
- self,
- label_prefix: str | None,
- effect_values: EffectValuesUser,
- label_suffix: str | None = None,
- ) -> EffectTimeSeries | None:
- """
- Transform EffectValues to EffectTimeSeries.
- Creates a TimeSeries for each key in the nested_values dictionary, using the value as the data.
-
- The resulting label of the TimeSeries is the label of the parent_element,
- followed by the label of the Effect in the nested_values and the label_suffix.
- If the key in the EffectValues is None, the alias 'Standard_Effect' is used
- """
- effect_values_dict: EffectValuesDict | None = self.effects.create_effect_values_dict(effect_values)
- if effect_values_dict is None:
- return None
-
- return {
- effect: self.create_time_series('|'.join(filter(None, [label_prefix, effect, label_suffix])), value)
- for effect, value in effect_values_dict.items()
- }
-
- def create_model(self) -> SystemModel:
- if not self._connected:
- raise RuntimeError('FlowSystem is not connected. Call FlowSystem.connect() first.')
- self.model = SystemModel(self)
- return self.model
-
def _check_if_element_is_unique(self, element: Element) -> None:
"""
checks if element or label of element already exists in list
@@ -410,25 +602,25 @@ def _check_if_element_is_unique(self, element: Element) -> None:
element: new element to check
"""
if element in self.all_elements.values():
- raise ValueError(f'Element {element.label} already added to FlowSystem!')
+ raise ValueError(f'Element {element.label_full} already added to FlowSystem!')
# check if name is already used:
if element.label_full in self.all_elements:
- raise ValueError(f'Label of Element {element.label} already used in another element!')
+ raise ValueError(f'Label of Element {element.label_full} already used in another element!')
def _add_effects(self, *args: Effect) -> None:
self.effects.add_effects(*args)
def _add_components(self, *components: Component) -> None:
for new_component in list(components):
- logger.info(f'Registered new Component: {new_component.label}')
+ logger.info(f'Registered new Component: {new_component.label_full}')
self._check_if_element_is_unique(new_component) # check if already exists:
- self.components[new_component.label] = new_component # Add to existing components
+ self.components[new_component.label_full] = new_component # Add to existing components
def _add_buses(self, *buses: Bus):
for new_bus in list(buses):
- logger.info(f'Registered new Bus: {new_bus.label}')
+ logger.info(f'Registered new Bus: {new_bus.label_full}')
self._check_if_element_is_unique(new_bus) # check if already exists:
- self.buses[new_bus.label] = new_bus # Add to existing components
+ self.buses[new_bus.label_full] = new_bus # Add to existing components
def _connect_network(self):
"""Connects the network of components and buses. Can be rerun without changes if no elements were added"""
@@ -440,7 +632,7 @@ def _connect_network(self):
# Add Bus if not already added (deprecated)
if flow._bus_object is not None and flow._bus_object not in self.buses.values():
warnings.warn(
- f'The Bus {flow._bus_object.label} was added to the FlowSystem from {flow.label_full}.'
+ f'The Bus {flow._bus_object.label_full} was added to the FlowSystem from {flow.label_full}.'
f'This is deprecated and will be removed in the future. '
f'Please pass the Bus.label to the Flow and the Bus to the FlowSystem instead.',
DeprecationWarning,
@@ -463,17 +655,106 @@ def _connect_network(self):
f'Connected {len(self.buses)} Buses and {len(self.components)} '
f'via {len(self.flows)} Flows inside the FlowSystem.'
)
- self._connected = True
- def __repr__(self):
- return f'<{self.__class__.__name__} with {len(self.components)} components and {len(self.effects)} effects>'
+ def __repr__(self) -> str:
+ """Compact representation for debugging."""
+ status = '✓' if self.connected_and_transformed else '⚠'
+
+ # Build dimension info
+ dims = f'{len(self.timesteps)} timesteps [{self.timesteps[0].strftime("%Y-%m-%d")} to {self.timesteps[-1].strftime("%Y-%m-%d")}]'
+ if self.periods is not None:
+ dims += f', {len(self.periods)} periods'
+ if self.scenarios is not None:
+ dims += f', {len(self.scenarios)} scenarios'
+
+ return f'FlowSystem({dims}, {len(self.components)} Components, {len(self.buses)} Buses, {len(self.effects)} Effects, {status})'
+
+ def __str__(self) -> str:
+ """Structured summary for users."""
+
+ def format_elements(element_names: list, label: str, alignment: int = 12):
+ name_list = ', '.join(element_names[:3])
+ if len(element_names) > 3:
+ name_list += f' ... (+{len(element_names) - 3} more)'
+
+ suffix = f' ({name_list})' if element_names else ''
+ padding = alignment - len(label) - 1 # -1 for the colon
+ return f'{label}:{"":<{padding}} {len(element_names)}{suffix}'
+
+ time_period = f'Time period: {self.timesteps[0].date()} to {self.timesteps[-1].date()}'
+ freq_str = str(self.timesteps.freq).replace('<', '').replace('>', '') if self.timesteps.freq else 'irregular'
+
+ lines = [
+ f'Timesteps: {len(self.timesteps)} ({freq_str}) [{time_period}]',
+ ]
+
+ # Add periods if present
+ if self.periods is not None:
+ period_names = ', '.join(str(p) for p in self.periods[:3])
+ if len(self.periods) > 3:
+ period_names += f' ... (+{len(self.periods) - 3} more)'
+ lines.append(f'Periods: {len(self.periods)} ({period_names})')
+
+ # Add scenarios if present
+ if self.scenarios is not None:
+ scenario_names = ', '.join(str(s) for s in self.scenarios[:3])
+ if len(self.scenarios) > 3:
+ scenario_names += f' ... (+{len(self.scenarios) - 3} more)'
+ lines.append(f'Scenarios: {len(self.scenarios)} ({scenario_names})')
+
+ lines.extend(
+ [
+ format_elements(list(self.components.keys()), 'Components'),
+ format_elements(list(self.buses.keys()), 'Buses'),
+ format_elements(list(self.effects.effects.keys()), 'Effects'),
+ f'Status: {"Connected & Transformed" if self.connected_and_transformed else "Not connected"}',
+ ]
+ )
+ lines = ['FlowSystem:', f'{"─" * max(len(line) for line in lines)}'] + lines
+
+ return '\n'.join(lines)
+
+ def __eq__(self, other: FlowSystem):
+ """Check if two FlowSystems are equal by comparing their dataset representations."""
+ if not isinstance(other, FlowSystem):
+ raise NotImplementedError('Comparison with other types is not implemented for class FlowSystem')
+
+ ds_me = self.to_dataset()
+ ds_other = other.to_dataset()
+
+ try:
+ xr.testing.assert_equal(ds_me, ds_other)
+ except AssertionError:
+ return False
+
+ if ds_me.attrs != ds_other.attrs:
+ return False
- def __str__(self):
- with StringIO() as output_buffer:
- console = Console(file=output_buffer, width=1000) # Adjust width as needed
- console.print(Pretty(self.as_dict('stats'), expand_all=True, indent_guides=True))
- value = output_buffer.getvalue()
- return value
+ return True
+
+ def __getitem__(self, item) -> Element:
+ """Get element by exact label with helpful error messages."""
+ if item in self.all_elements:
+ return self.all_elements[item]
+
+ # Provide helpful error with suggestions
+ from difflib import get_close_matches
+
+ suggestions = get_close_matches(item, self.all_elements.keys(), n=3, cutoff=0.6)
+
+ if suggestions:
+ suggestion_str = ', '.join(f"'{s}'" for s in suggestions)
+ raise KeyError(f"Element '{item}' not found. Did you mean: {suggestion_str}?")
+ else:
+ raise KeyError(f"Element '{item}' not found in FlowSystem")
+
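The fuzzy suggestions come straight from the standard library; a quick check of the behavior:

```python
from difflib import get_close_matches

labels = ['Boiler', 'HeatPump', 'GasGrid']
assert get_close_matches('Boilr', labels, n=3, cutoff=0.6) == ['Boiler']
```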
+ def __contains__(self, item: str) -> bool:
+ """Check if element exists in the FlowSystem."""
+ return item in self.all_elements
+
+ def __iter__(self):
+ """Iterate over element labels."""
+ return iter(self.all_elements.keys())
@property
def flows(self) -> dict[str, Flow]:
@@ -483,3 +764,217 @@ def flows(self) -> dict[str, Flow]:
@property
def all_elements(self) -> dict[str, Element]:
return {**self.components, **self.effects.effects, **self.flows, **self.buses}
+
+ @property
+ def coords(self) -> dict[FlowSystemDimensions, pd.Index]:
+ active_coords = {'time': self.timesteps}
+ if self.periods is not None:
+ active_coords['period'] = self.periods
+ if self.scenarios is not None:
+ active_coords['scenario'] = self.scenarios
+ return active_coords
+
+ @property
+ def used_in_calculation(self) -> bool:
+ return self._used_in_calculation
+
+ def _validate_scenario_parameter(self, value: bool | list[str], param_name: str, element_type: str) -> None:
+ """
+ Validate scenario parameter value.
+
+ Args:
+ value: The value to validate
+ param_name: Name of the parameter (for error messages)
+ element_type: Type of elements expected in list (e.g., 'component label_full', 'flow label_full')
+
+ Raises:
+ TypeError: If value is not bool or list[str]
+ ValueError: If list contains non-string elements
+ """
+ if isinstance(value, bool):
+ return # Valid
+ elif isinstance(value, list):
+ if not all(isinstance(item, str) for item in value):
+ raise ValueError(f'{param_name} list must contain only strings ({element_type} values)')
+ else:
+ raise TypeError(f'{param_name} must be bool or list[str], got {type(value).__name__}')
+
+ @property
+ def scenario_independent_sizes(self) -> bool | list[str]:
+ """
+ Controls whether investment sizes are equalized across scenarios.
+
+ Returns:
+ bool or list[str]: Configuration for scenario-independent sizing
+ """
+ return self._scenario_independent_sizes
+
+ @scenario_independent_sizes.setter
+ def scenario_independent_sizes(self, value: bool | list[str]) -> None:
+ """
+ Set whether investment sizes should be equalized across scenarios.
+
+ Args:
+ value: True (all equalized), False (all vary), or list of component label_full strings to equalize
+
+ Raises:
+ TypeError: If value is not bool or list[str]
+ ValueError: If list contains non-string elements
+ """
+ self._validate_scenario_parameter(value, 'scenario_independent_sizes', 'Element.label_full')
+ self._scenario_independent_sizes = value
+
+ @property
+ def scenario_independent_flow_rates(self) -> bool | list[str]:
+ """
+ Controls whether flow rates are equalized across scenarios.
+
+ Returns:
+ bool or list[str]: Configuration for scenario-independent flow rates
+ """
+ return self._scenario_independent_flow_rates
+
+ @scenario_independent_flow_rates.setter
+ def scenario_independent_flow_rates(self, value: bool | list[str]) -> None:
+ """
+ Set whether flow rates should be equalized across scenarios.
+
+ Args:
+ value: True (all equalized), False (all vary), or list of flow label_full strings to equalize
+
+ Raises:
+ TypeError: If value is not bool or list[str]
+ ValueError: If list contains non-string elements
+ """
+ self._validate_scenario_parameter(value, 'scenario_independent_flow_rates', 'Flow.label_full')
+ self._scenario_independent_flow_rates = value
+
+ def sel(
+ self,
+ time: str | slice | list[str] | pd.Timestamp | pd.DatetimeIndex | None = None,
+ period: int | slice | list[int] | pd.Index | None = None,
+ scenario: str | slice | list[str] | pd.Index | None = None,
+ ) -> FlowSystem:
+ """
+ Select a subset of the FlowSystem by coordinate labels (time, period, and/or scenario).
+
+ Args:
+ time: Time selection (e.g., slice('2023-01-01', '2023-12-31'), '2023-06-15', or list of times)
+ period: Period selection (e.g., slice(2023, 2024), or list of periods)
+ scenario: Scenario selection (e.g., slice('scenario1', 'scenario2'), or list of scenarios)
+
+ Returns:
+ FlowSystem: New FlowSystem with selected data
+ """
+ if not self.connected_and_transformed:
+ self.connect_and_transform()
+
+ ds = self.to_dataset()
+
+ # Build indexers dict from non-None parameters
+ indexers = {}
+ if time is not None:
+ indexers['time'] = time
+ if period is not None:
+ indexers['period'] = period
+ if scenario is not None:
+ indexers['scenario'] = scenario
+
+ if not indexers:
+ return self.copy() # Return a copy when no selection
+
+ selected_dataset = ds.sel(**indexers)
+ return self.__class__.from_dataset(selected_dataset)
+
+ def isel(
+ self,
+ time: int | slice | list[int] | None = None,
+ period: int | slice | list[int] | None = None,
+ scenario: int | slice | list[int] | None = None,
+ ) -> FlowSystem:
+ """
+ Select a subset of the flowsystem by integer indices.
+
+ Args:
+ time: Time selection by integer index (e.g., slice(0, 100), 50, or [0, 5, 10])
+ period: Period selection by integer index (e.g., slice(0, 100), 50, or [0, 5, 10])
+ scenario: Scenario selection by integer index (e.g., slice(0, 3), 50, or [0, 5, 10])
+
+ Returns:
+ FlowSystem: New FlowSystem with selected data
+ """
+ if not self.connected_and_transformed:
+ self.connect_and_transform()
+
+ ds = self.to_dataset()
+
+ # Build indexers dict from non-None parameters
+ indexers = {}
+ if time is not None:
+ indexers['time'] = time
+ if period is not None:
+ indexers['period'] = period
+ if scenario is not None:
+ indexers['scenario'] = scenario
+
+ if not indexers:
+ return self.copy() # Return a copy when no selection
+
+ selected_dataset = ds.isel(**indexers)
+ return self.__class__.from_dataset(selected_dataset)
+
+ def resample(
+ self,
+ time: str,
+ method: Literal['mean', 'sum', 'max', 'min', 'first', 'last', 'std', 'var', 'median', 'count'] = 'mean',
+ **kwargs: Any,
+ ) -> FlowSystem:
+ """
+ Create a resampled FlowSystem by resampling data along the time dimension (like xr.Dataset.resample()).
+ Only resamples data variables that have a time dimension.
+
+ Args:
+ time: Resampling frequency (e.g., '3h', '2D', '1M')
+ method: Resampling method. Recommended: 'mean', 'first', 'last', 'max', 'min'
+ **kwargs: Additional arguments passed to xarray.resample()
+
+ Returns:
+ FlowSystem: New FlowSystem with resampled data
+ """
+ if not self.connected_and_transformed:
+ self.connect_and_transform()
+
+ dataset = self.to_dataset()
+
+ # Separate variables with and without time dimension
+ time_vars = {}
+ non_time_vars = {}
+
+ for var_name, var in dataset.data_vars.items():
+ if 'time' in var.dims:
+ time_vars[var_name] = var
+ else:
+ non_time_vars[var_name] = var
+
+ # Only resample variables that have time dimension
+ time_dataset = dataset[list(time_vars.keys())]
+ resampler = time_dataset.resample(time=time, **kwargs)
+
+ if hasattr(resampler, method):
+ resampled_time_data = getattr(resampler, method)()
+ else:
+ available_methods = ['mean', 'sum', 'max', 'min', 'first', 'last', 'std', 'var', 'median', 'count']
+ raise ValueError(f'Unsupported resampling method: {method}. Available: {available_methods}')
+
+ # Combine resampled time variables with non-time variables
+ if non_time_vars:
+ non_time_dataset = dataset[list(non_time_vars.keys())]
+ resampled_dataset = xr.merge([resampled_time_data, non_time_dataset])
+ else:
+ resampled_dataset = resampled_time_data
+
+ return self.__class__.from_dataset(resampled_dataset)
+
+ @property
+ def connected_and_transformed(self) -> bool:
+ return self._connected_and_transformed
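
The FlowSystem additions above compose into a small, xarray-like API. A minimal usage sketch (the component label 'Boiler' and the dates are illustrative, assuming an already populated `flow_system`):

```python
# Element access with fuzzy suggestions: a near-miss raises
# KeyError("Element 'Boilr' not found. Did you mean: 'Boiler'?")
boiler = flow_system['Boiler']
assert 'Boiler' in flow_system      # __contains__ checks element labels
labels = list(flow_system)          # __iter__ yields element labels

# Label-based and index-based subsetting return new FlowSystem objects
january = flow_system.sel(time=slice('2023-01-01', '2023-01-31'))
first_week = flow_system.isel(time=slice(0, 168))

# Coarsen the time axis; variables without a time dimension are kept as-is
coarse = flow_system.resample(time='4h', method='mean')

# Dataset-based equality: a copy compares equal to the original
assert flow_system == flow_system.copy()
```

Note that `sel()` and `isel()` accept `period` and `scenario` indexers as well, and scenario coupling is steered via `flow_system.scenario_independent_sizes` and `flow_system.scenario_independent_flow_rates` (True, False, or a list of `label_full` strings).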
diff --git a/flixopt/interface.py b/flixopt/interface.py
index 72737cc45..ab47c2522 100644
--- a/flixopt/interface.py
+++ b/flixopt/interface.py
@@ -6,7 +6,12 @@
from __future__ import annotations
import logging
-from typing import TYPE_CHECKING
+import warnings
+from typing import TYPE_CHECKING, Literal, Optional
+
+import numpy as np
+import pandas as pd
+import xarray as xr
from .config import CONFIG
from .structure import Interface, register_class_for_io
@@ -14,8 +19,8 @@
if TYPE_CHECKING: # for type checking and preventing circular imports
from collections.abc import Iterator
- from .core import NumericData
- from .effects import EffectValuesUser, EffectValuesUserScalar
+ from .core import PeriodicData, PeriodicDataUser, Scalar, TemporalDataUser
+ from .effects import PeriodicEffectsUser, TemporalEffectsUser
from .flow_system import FlowSystem
@@ -68,21 +73,21 @@ class Piece(Interface):
"""
- def __init__(self, start: NumericData, end: NumericData):
+ def __init__(self, start: TemporalDataUser, end: TemporalDataUser):
self.start = start
self.end = end
+ self.has_time_dim = False
- def transform_data(self, flow_system: FlowSystem, name_prefix: str):
- self.start = flow_system.create_time_series(f'{name_prefix}|start', self.start)
- self.end = flow_system.create_time_series(f'{name_prefix}|end', self.end)
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ dims = None if self.has_time_dim else ['period', 'scenario']
+ self.start = flow_system.fit_to_model_coords(f'{name_prefix}|start', self.start, dims=dims)
+ self.end = flow_system.fit_to_model_coords(f'{name_prefix}|end', self.end, dims=dims)
@register_class_for_io
class Piecewise(Interface):
- """Define a piecewise linear function by combining multiple `Piece`s together.
-
- This class creates complex non-linear relationships by combining multiple
- Piece objects into a single piecewise linear function.
+ """
+ Define a Piecewise, consisting of a list of Pieces.
Args:
pieces: list of Piece objects defining the linear segments. The arrangement
@@ -192,6 +197,17 @@ class Piecewise(Interface):
def __init__(self, pieces: list[Piece]):
self.pieces = pieces
+ self._has_time_dim = False
+
+ @property
+ def has_time_dim(self):
+ return self._has_time_dim
+
+ @has_time_dim.setter
+ def has_time_dim(self, value):
+ self._has_time_dim = value
+ for piece in self.pieces:
+ piece.has_time_dim = value
def __len__(self):
"""
@@ -208,7 +224,7 @@ def __getitem__(self, index) -> Piece:
def __iter__(self) -> Iterator[Piece]:
return iter(self.pieces) # Enables iteration like for piece in piecewise: ...
- def transform_data(self, flow_system: FlowSystem, name_prefix: str):
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
for i, piece in enumerate(self.pieces):
piece.transform_data(flow_system, f'{name_prefix}|Piece{i}')
@@ -228,6 +244,10 @@ class PiecewiseConversion(Interface):
When the equipment operates at a given point, ALL flows scale proportionally
within their respective pieces.
+ Mathematical Formulation:
+ See the complete mathematical model in the documentation:
+ [Piecewise](../user-guide/mathematical-notation/features/Piecewise.md)
+
Args:
piecewises: Dictionary mapping flow labels to their Piecewise functions.
Keys are flow identifiers (e.g., 'electricity_in', 'heat_out', 'fuel_consumed').
@@ -408,6 +428,18 @@ class PiecewiseConversion(Interface):
def __init__(self, piecewises: dict[str, Piecewise]):
self.piecewises = piecewises
+ self._has_time_dim = True
+ self.has_time_dim = True # Initial propagation
+
+ @property
+ def has_time_dim(self):
+ return self._has_time_dim
+
+ @has_time_dim.setter
+ def has_time_dim(self, value):
+ self._has_time_dim = value
+ for piecewise in self.piecewises.values():
+ piecewise.has_time_dim = value
def items(self):
"""
@@ -418,7 +450,7 @@ def items(self):
"""
return self.piecewises.items()
- def transform_data(self, flow_system: FlowSystem, name_prefix: str):
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
for name, piecewise in self.piecewises.items():
piecewise.transform_data(flow_system, f'{name_prefix}|{name}')
@@ -616,12 +648,24 @@ class PiecewiseEffects(Interface):
def __init__(self, piecewise_origin: Piecewise, piecewise_shares: dict[str, Piecewise]):
self.piecewise_origin = piecewise_origin
self.piecewise_shares = piecewise_shares
+ self._has_time_dim = False
+ self.has_time_dim = False # Initial propagation
+
+ @property
+ def has_time_dim(self):
+ return self._has_time_dim
- def transform_data(self, flow_system: FlowSystem, name_prefix: str):
- raise NotImplementedError('PiecewiseEffects is not yet implemented for non scalar shares')
- # self.piecewise_origin.transform_data(flow_system, f'{name_prefix}|PiecewiseEffects|origin')
- # for name, piecewise in self.piecewise_shares.items():
- # piecewise.transform_data(flow_system, f'{name_prefix}|PiecewiseEffects|{name}')
+ @has_time_dim.setter
+ def has_time_dim(self, value):
+ self._has_time_dim = value
+ self.piecewise_origin.has_time_dim = value
+ for piecewise in self.piecewise_shares.values():
+ piecewise.has_time_dim = value
+
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ self.piecewise_origin.transform_data(flow_system, f'{name_prefix}|PiecewiseEffects|origin')
+ for effect, piecewise in self.piecewise_shares.items():
+ piecewise.transform_data(flow_system, f'{name_prefix}|PiecewiseEffects|{effect}')
@register_class_for_io
@@ -646,30 +690,41 @@ class InvestParameters(Interface):
- **Piecewise Effects**: Non-linear relationships (bulk discounts, learning curves)
- **Divestment Effects**: Penalties for not investing (demolition, opportunity costs)
+ Mathematical Formulation:
+ See the complete mathematical model in the documentation:
+ [InvestParameters](../user-guide/mathematical-notation/features/InvestParameters.md)
+
Args:
- fixed_size: When specified, creates a binary investment decision at exactly
- this size. When None, allows continuous sizing between minimum and maximum bounds.
- minimum_size: Lower bound for continuous sizing decisions. Defaults to a small
- positive value (CONFIG.Modeling.epsilon) to avoid numerical issues.
- Ignored when fixed_size is specified.
- maximum_size: Upper bound for continuous sizing decisions. Defaults to a large
- value (CONFIG.Modeling.big) representing unlimited capacity.
- Ignored when fixed_size is specified.
- optional: Controls whether investment is required. When True (default),
- optimization can choose not to invest. When False, forces investment
+ fixed_size: Creates binary decision at this exact size. None allows continuous sizing.
+ minimum_size: Lower bound for continuous sizing. Default: CONFIG.Modeling.epsilon.
+ Ignored if fixed_size is specified.
+ maximum_size: Upper bound for continuous sizing. Default: CONFIG.Modeling.big.
+ Ignored if fixed_size is specified.
+ mandatory: Controls whether investment is required. When True, forces investment
to occur (useful for mandatory upgrades or replacement decisions).
- fix_effects: Fixed costs incurred once if investment is made, regardless
- of size. Dictionary mapping effect names to values
- (e.g., {'cost': 10000, 'CO2_construction': 500}).
- specific_effects: Variable costs proportional to investment size, representing
- per-unit costs (€/kW, €/m²). Dictionary mapping effect names to unit values
- (e.g., {'cost': 1200, 'steel_required': 0.5}).
- piecewise_effects: Non-linear cost relationships using PiecewiseEffects for
- economies of scale, learning curves, or threshold effects. Can be combined
- with fix_effects and specific_effects.
- divest_effects: Costs incurred if the investment is NOT made, such as
- demolition of existing equipment, contractual penalties, or lost opportunities.
- Dictionary mapping effect names to values.
+ When False (default), optimization can choose not to invest.
+ When True with multiple periods, at least one period must contain an investment.
+ effects_of_investment: Fixed costs if investment is made, regardless of size.
+ Dict: {'effect_name': value} (e.g., {'cost': 10000}).
+ effects_of_investment_per_size: Variable costs proportional to size (per-unit costs).
+ Dict: {'effect_name': value/unit} (e.g., {'cost': 1200}).
+ piecewise_effects_of_investment: Non-linear costs using PiecewiseEffects.
+ Combinable with effects_of_investment and effects_of_investment_per_size.
+ effects_of_retirement: Costs incurred if NOT investing (demolition, penalties).
+ Dict: {'effect_name': value}.
+
+ Deprecated Args:
+ fix_effects: **Deprecated**. Use `effects_of_investment` instead.
+ Will be removed in version 4.0.
+ specific_effects: **Deprecated**. Use `effects_of_investment_per_size` instead.
+ Will be removed in version 4.0.
+ divest_effects: **Deprecated**. Use `effects_of_retirement` instead.
+ Will be removed in version 4.0.
+ piecewise_effects: **Deprecated**. Use `piecewise_effects_of_investment` instead.
+ Will be removed in version 4.0.
+ optional: **Deprecated**. Use `mandatory` instead (inverse meaning).
+ Will be removed in version 4.0.
+ linked_periods: Which periods are linked: 1 means linked, 0 forces size=0 in that period. None means no linking. A (first_period, last_period) tuple is expanded automatically.
Cost Annualization Requirements:
All cost values must be properly weighted to match the optimization model's time horizon.
@@ -687,12 +742,12 @@ class InvestParameters(Interface):
```python
solar_investment = InvestParameters(
fixed_size=100, # 100 kW system (binary decision)
- optional=True,
- fix_effects={
+ mandatory=False, # Investment is optional
+ effects_of_investment={
'cost': 25000, # Installation and permitting costs
'CO2': -50000, # Avoided emissions over lifetime
},
- specific_effects={
+ effects_of_investment_per_size={
'cost': 1200, # €1200/kW for panels (annualized)
'CO2': -800, # kg CO2 avoided per kW annually
},
@@ -705,12 +760,12 @@ class InvestParameters(Interface):
battery_investment = InvestParameters(
minimum_size=10, # Minimum viable system size (kWh)
maximum_size=1000, # Maximum installable capacity
- optional=True,
- fix_effects={
+ mandatory=False, # Investment is optional
+ effects_of_investment={
'cost': 5000, # Grid connection and control system
'installation_time': 2, # Days for fixed components
},
- piecewise_effects=PiecewiseEffects(
+ piecewise_effects_of_investment=PiecewiseEffects(
piecewise_origin=Piecewise(
[
Piece(0, 100), # Small systems
@@ -731,22 +786,22 @@ class InvestParameters(Interface):
)
```
- Mandatory replacement with divestment costs:
+ Mandatory replacement with retirement costs:
```python
boiler_replacement = InvestParameters(
minimum_size=50,
maximum_size=200,
- optional=True, # Can choose not to replace
- fix_effects={
+ mandatory=False, # Can choose not to replace
+ effects_of_investment={
'cost': 15000, # Installation costs
'disruption': 3, # Days of downtime
},
- specific_effects={
+ effects_of_investment_per_size={
'cost': 400, # €400/kW capacity
'maintenance': 25, # Annual maintenance per kW
},
- divest_effects={
+ effects_of_retirement={
'cost': 8000, # Demolition if not replaced
'environmental': 100, # Disposal fees
},
@@ -759,16 +814,16 @@ class InvestParameters(Interface):
# Gas turbine option
gas_turbine = InvestParameters(
fixed_size=50, # MW
- fix_effects={'cost': 2500000, 'CO2': 1250000},
- specific_effects={'fuel_cost': 45, 'maintenance': 12},
+ effects_of_investment={'cost': 2500000, 'CO2': 1250000},
+ effects_of_investment_per_size={'fuel_cost': 45, 'maintenance': 12},
)
# Wind farm option
wind_farm = InvestParameters(
minimum_size=20,
maximum_size=100,
- fix_effects={'cost': 1000000, 'CO2': -5000000},
- specific_effects={'cost': 1800000, 'land_use': 0.5},
+ effects_of_investment={'cost': 1000000, 'CO2': -5000000},
+ effects_of_investment_per_size={'cost': 1800000, 'land_use': 0.5},
)
```
@@ -778,7 +833,7 @@ class InvestParameters(Interface):
hydrogen_electrolyzer = InvestParameters(
minimum_size=1,
maximum_size=50, # MW
- piecewise_effects=PiecewiseEffects(
+ piecewise_effects_of_investment=PiecewiseEffects(
piecewise_origin=Piecewise(
[
Piece(0, 5), # Small scale: early adoption
@@ -818,36 +873,188 @@ class InvestParameters(Interface):
def __init__(
self,
- fixed_size: int | float | None = None,
- minimum_size: int | float | None = None,
- maximum_size: int | float | None = None,
- optional: bool = True, # Investition ist weglassbar
- fix_effects: EffectValuesUserScalar | None = None,
- specific_effects: EffectValuesUserScalar | None = None, # costs per Flow-Unit/Storage-Size/...
- piecewise_effects: PiecewiseEffects | None = None,
- divest_effects: EffectValuesUserScalar | None = None,
+ fixed_size: PeriodicDataUser | None = None,
+ minimum_size: PeriodicDataUser | None = None,
+ maximum_size: PeriodicDataUser | None = None,
+ mandatory: bool = False,
+ effects_of_investment: PeriodicEffectsUser | None = None,
+ effects_of_investment_per_size: PeriodicEffectsUser | None = None,
+ effects_of_retirement: PeriodicEffectsUser | None = None,
+ piecewise_effects_of_investment: PiecewiseEffects | None = None,
+ linked_periods: PeriodicDataUser | tuple[int, int] | None = None,
+ **kwargs,
):
- self.fix_effects: EffectValuesUserScalar = fix_effects or {}
- self.divest_effects: EffectValuesUserScalar = divest_effects or {}
+ # Handle deprecated parameters using centralized helper
+ effects_of_investment = self._handle_deprecated_kwarg(
+ kwargs, 'fix_effects', 'effects_of_investment', effects_of_investment
+ )
+ effects_of_investment_per_size = self._handle_deprecated_kwarg(
+ kwargs, 'specific_effects', 'effects_of_investment_per_size', effects_of_investment_per_size
+ )
+ effects_of_retirement = self._handle_deprecated_kwarg(
+ kwargs, 'divest_effects', 'effects_of_retirement', effects_of_retirement
+ )
+ piecewise_effects_of_investment = self._handle_deprecated_kwarg(
+ kwargs, 'piecewise_effects', 'piecewise_effects_of_investment', piecewise_effects_of_investment
+ )
+ # For mandatory parameter with non-None default, disable conflict checking
+ if 'optional' in kwargs:
+ warnings.warn(
+ 'Deprecated parameter "optional" used. Check conflicts with new parameter "mandatory" manually!',
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ mandatory = self._handle_deprecated_kwarg(
+ kwargs, 'optional', 'mandatory', mandatory, transform=lambda x: not x, check_conflict=False
+ )
+
+ # Validate any remaining unexpected kwargs
+ self._validate_kwargs(kwargs)
+
+ self.effects_of_investment: PeriodicEffectsUser = (
+ effects_of_investment if effects_of_investment is not None else {}
+ )
+ self.effects_of_retirement: PeriodicEffectsUser = (
+ effects_of_retirement if effects_of_retirement is not None else {}
+ )
self.fixed_size = fixed_size
- self.optional = optional
- self.specific_effects: EffectValuesUserScalar = specific_effects or {}
- self.piecewise_effects = piecewise_effects
- self._minimum_size = minimum_size if minimum_size is not None else CONFIG.Modeling.epsilon
- self._maximum_size = maximum_size if maximum_size is not None else CONFIG.Modeling.big # default maximum
+ self.mandatory = mandatory
+ self.effects_of_investment_per_size: PeriodicEffectsUser = (
+ effects_of_investment_per_size if effects_of_investment_per_size is not None else {}
+ )
+ self.piecewise_effects_of_investment = piecewise_effects_of_investment
+ self.minimum_size = minimum_size if minimum_size is not None else CONFIG.Modeling.epsilon
+ self.maximum_size = maximum_size if maximum_size is not None else CONFIG.Modeling.big # default maximum
+ self.linked_periods = linked_periods
+
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ self.effects_of_investment = flow_system.fit_effects_to_model_coords(
+ label_prefix=name_prefix,
+ effect_values=self.effects_of_investment,
+ label_suffix='effects_of_investment',
+ dims=['period', 'scenario'],
+ )
+ self.effects_of_retirement = flow_system.fit_effects_to_model_coords(
+ label_prefix=name_prefix,
+ effect_values=self.effects_of_retirement,
+ label_suffix='effects_of_retirement',
+ dims=['period', 'scenario'],
+ )
+ self.effects_of_investment_per_size = flow_system.fit_effects_to_model_coords(
+ label_prefix=name_prefix,
+ effect_values=self.effects_of_investment_per_size,
+ label_suffix='effects_of_investment_per_size',
+ dims=['period', 'scenario'],
+ )
- def transform_data(self, flow_system: FlowSystem):
- self.fix_effects = flow_system.effects.create_effect_values_dict(self.fix_effects)
- self.divest_effects = flow_system.effects.create_effect_values_dict(self.divest_effects)
- self.specific_effects = flow_system.effects.create_effect_values_dict(self.specific_effects)
+ if self.piecewise_effects_of_investment is not None:
+ self.piecewise_effects_of_investment.has_time_dim = False
+ self.piecewise_effects_of_investment.transform_data(flow_system, f'{name_prefix}|PiecewiseEffects')
+
+ self.minimum_size = flow_system.fit_to_model_coords(
+ f'{name_prefix}|minimum_size', self.minimum_size, dims=['period', 'scenario']
+ )
+ self.maximum_size = flow_system.fit_to_model_coords(
+ f'{name_prefix}|maximum_size', self.maximum_size, dims=['period', 'scenario']
+ )
+ # Convert tuple (first_period, last_period) to DataArray if needed
+ if isinstance(self.linked_periods, (tuple, list)):
+ if len(self.linked_periods) != 2:
+ raise TypeError(
+ f'If you provide a tuple to "linked_periods", it needs to be len=2. Got {len(self.linked_periods)=}'
+ )
+ logger.debug(f'Computing linked_periods from {self.linked_periods}')
+ start, end = self.linked_periods
+ if start not in flow_system.periods.values:
+ logger.warning(
+ f'Start of linked periods ({start}) not found among periods: {flow_system.periods.values}'
+ )
+ if end not in flow_system.periods.values:
+ logger.warning(
+ f'End of linked periods ({end}) not found among periods: {flow_system.periods.values}'
+ )
+ self.linked_periods = self.compute_linked_periods(start, end, flow_system.periods)
+ logger.debug(f'Computed {self.linked_periods=}')
+
+ self.linked_periods = flow_system.fit_to_model_coords(
+ f'{name_prefix}|linked_periods', self.linked_periods, dims=['period', 'scenario']
+ )
+ self.fixed_size = flow_system.fit_to_model_coords(
+ f'{name_prefix}|fixed_size', self.fixed_size, dims=['period', 'scenario']
+ )
@property
- def minimum_size(self):
- return self.fixed_size or self._minimum_size
+ def optional(self) -> bool:
+ """DEPRECATED: Use 'mandatory' property instead. Returns the opposite of 'mandatory'."""
+ warnings.warn("Property 'optional' is deprecated. Use 'mandatory' instead.", DeprecationWarning, stacklevel=2)
+ return not self.mandatory
+
+ @optional.setter
+ def optional(self, value: bool):
+ """DEPRECATED: Use 'mandatory' property instead. Sets the opposite of the given value to 'mandatory'."""
+ warnings.warn("Property 'optional' is deprecated. Use 'mandatory' instead.", DeprecationWarning, stacklevel=2)
+ self.mandatory = not value
@property
- def maximum_size(self):
- return self.fixed_size or self._maximum_size
+ def fix_effects(self) -> PeriodicEffectsUser:
+ """Deprecated property. Use effects_of_investment instead."""
+ warnings.warn(
+ 'The fix_effects property is deprecated. Use effects_of_investment instead.',
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.effects_of_investment
+
+ @property
+ def specific_effects(self) -> PeriodicEffectsUser:
+ """Deprecated property. Use effects_of_investment_per_size instead."""
+ warnings.warn(
+ 'The specific_effects property is deprecated. Use effects_of_investment_per_size instead.',
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.effects_of_investment_per_size
+
+ @property
+ def divest_effects(self) -> PeriodicEffectsUser:
+ """Deprecated property. Use effects_of_retirement instead."""
+ warnings.warn(
+ 'The divest_effects property is deprecated. Use effects_of_retirement instead.',
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.effects_of_retirement
+
+ @property
+ def piecewise_effects(self) -> PiecewiseEffects | None:
+ """Deprecated property. Use piecewise_effects_of_investment instead."""
+ warnings.warn(
+ 'The piecewise_effects property is deprecated. Use piecewise_effects_of_investment instead.',
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.piecewise_effects_of_investment
+
+ @property
+ def minimum_or_fixed_size(self) -> PeriodicData:
+ return self.fixed_size if self.fixed_size is not None else self.minimum_size
+
+ @property
+ def maximum_or_fixed_size(self) -> PeriodicData:
+ return self.fixed_size if self.fixed_size is not None else self.maximum_size
+
+ @staticmethod
+ def compute_linked_periods(first_period: int, last_period: int, periods: pd.Index | list[int]) -> xr.DataArray:
+ periods_index = pd.Index(periods, name='period')
+ values = ((first_period <= periods_index.values) & (periods_index.values <= last_period)).astype(int)
+ return xr.DataArray(values, coords={'period': periods_index}, dims='period', name='linked_periods')
@register_class_for_io
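
A sketch of the renamed InvestParameters API and the `linked_periods` helper (effect names, sizes, and period values are illustrative):

```python
import pandas as pd

from flixopt.interface import InvestParameters

params = InvestParameters(
    minimum_size=10,
    maximum_size=500,
    mandatory=False,                               # replaces optional=True
    effects_of_investment={'cost': 20_000},        # was fix_effects
    effects_of_investment_per_size={'cost': 900},  # was specific_effects
    effects_of_retirement={'cost': 5_000},         # was divest_effects
    linked_periods=(2025, 2030),                   # tuple is expanded in transform_data
)

# The static helper turns a (first, last) tuple into a 0/1 mask over the periods
mask = InvestParameters.compute_linked_periods(2025, 2030, pd.Index([2020, 2025, 2030, 2035]))
print(mask.values)  # [0 1 1 0]

# Deprecated names still work, but emit a DeprecationWarning and map to the new attributes
legacy = InvestParameters(fix_effects={'cost': 20_000})
assert legacy.effects_of_investment == {'cost': 20_000}
```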
@@ -872,6 +1079,10 @@ class OnOffParameters(Interface):
- **Backup Equipment**: Emergency generators, standby systems
- **Process Equipment**: Compressors, pumps with operational constraints
+ Mathematical Formulation:
+ See the complete mathematical model in the documentation:
+ [OnOffParameters](../user-guide/mathematical-notation/features/OnOffParameters.md)
+
Args:
effects_per_switch_on: Costs or impacts incurred for each transition from
off state (var_on=0) to on state (var_on=1). Represents startup costs,
@@ -950,7 +1161,7 @@ class OnOffParameters(Interface):
consecutive_on_hours_min=12, # Minimum batch size (12 hours)
consecutive_on_hours_max=24, # Maximum batch size (24 hours)
consecutive_off_hours_min=6, # Cleaning and setup time
- switch_on_total_max=200, # Maximum 200 batches per year
+ switch_on_total_max=200, # Maximum 200 batches per period
on_hours_total_max=4000, # Maximum production time
)
```
@@ -1030,47 +1241,60 @@ class OnOffParameters(Interface):
def __init__(
self,
- effects_per_switch_on: EffectValuesUser | None = None,
- effects_per_running_hour: EffectValuesUser | None = None,
+ effects_per_switch_on: TemporalEffectsUser | None = None,
+ effects_per_running_hour: TemporalEffectsUser | None = None,
on_hours_total_min: int | None = None,
on_hours_total_max: int | None = None,
- consecutive_on_hours_min: NumericData | None = None,
- consecutive_on_hours_max: NumericData | None = None,
- consecutive_off_hours_min: NumericData | None = None,
- consecutive_off_hours_max: NumericData | None = None,
+ consecutive_on_hours_min: TemporalDataUser | None = None,
+ consecutive_on_hours_max: TemporalDataUser | None = None,
+ consecutive_off_hours_min: TemporalDataUser | None = None,
+ consecutive_off_hours_max: TemporalDataUser | None = None,
switch_on_total_max: int | None = None,
force_switch_on: bool = False,
):
- self.effects_per_switch_on: EffectValuesUser = effects_per_switch_on or {}
- self.effects_per_running_hour: EffectValuesUser = effects_per_running_hour or {}
- self.on_hours_total_min = on_hours_total_min
- self.on_hours_total_max = on_hours_total_max
- self.consecutive_on_hours_min = consecutive_on_hours_min
- self.consecutive_on_hours_max = consecutive_on_hours_max
- self.consecutive_off_hours_min = consecutive_off_hours_min
- self.consecutive_off_hours_max = consecutive_off_hours_max
- self.switch_on_total_max = switch_on_total_max
+ self.effects_per_switch_on: TemporalEffectsUser = (
+ effects_per_switch_on if effects_per_switch_on is not None else {}
+ )
+ self.effects_per_running_hour: TemporalEffectsUser = (
+ effects_per_running_hour if effects_per_running_hour is not None else {}
+ )
+ self.on_hours_total_min: Scalar = on_hours_total_min
+ self.on_hours_total_max: Scalar = on_hours_total_max
+ self.consecutive_on_hours_min: TemporalDataUser = consecutive_on_hours_min
+ self.consecutive_on_hours_max: TemporalDataUser = consecutive_on_hours_max
+ self.consecutive_off_hours_min: TemporalDataUser = consecutive_off_hours_min
+ self.consecutive_off_hours_max: TemporalDataUser = consecutive_off_hours_max
+ self.switch_on_total_max: Scalar = switch_on_total_max
self.force_switch_on: bool = force_switch_on
- def transform_data(self, flow_system: FlowSystem, name_prefix: str):
- self.effects_per_switch_on = flow_system.create_effect_time_series(
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ self.effects_per_switch_on = flow_system.fit_effects_to_model_coords(
name_prefix, self.effects_per_switch_on, 'per_switch_on'
)
- self.effects_per_running_hour = flow_system.create_effect_time_series(
+ self.effects_per_running_hour = flow_system.fit_effects_to_model_coords(
name_prefix, self.effects_per_running_hour, 'per_running_hour'
)
- self.consecutive_on_hours_min = flow_system.create_time_series(
+ self.consecutive_on_hours_min = flow_system.fit_to_model_coords(
f'{name_prefix}|consecutive_on_hours_min', self.consecutive_on_hours_min
)
- self.consecutive_on_hours_max = flow_system.create_time_series(
+ self.consecutive_on_hours_max = flow_system.fit_to_model_coords(
f'{name_prefix}|consecutive_on_hours_max', self.consecutive_on_hours_max
)
- self.consecutive_off_hours_min = flow_system.create_time_series(
+ self.consecutive_off_hours_min = flow_system.fit_to_model_coords(
f'{name_prefix}|consecutive_off_hours_min', self.consecutive_off_hours_min
)
- self.consecutive_off_hours_max = flow_system.create_time_series(
+ self.consecutive_off_hours_max = flow_system.fit_to_model_coords(
f'{name_prefix}|consecutive_off_hours_max', self.consecutive_off_hours_max
)
+ self.on_hours_total_max = flow_system.fit_to_model_coords(
+ f'{name_prefix}|on_hours_total_max', self.on_hours_total_max, dims=['period', 'scenario']
+ )
+ self.on_hours_total_min = flow_system.fit_to_model_coords(
+ f'{name_prefix}|on_hours_total_min', self.on_hours_total_min, dims=['period', 'scenario']
+ )
+ self.switch_on_total_max = flow_system.fit_to_model_coords(
+ f'{name_prefix}|switch_on_total_max', self.switch_on_total_max, dims=['period', 'scenario']
+ )
@property
def use_off(self) -> bool:
@@ -1089,16 +1313,14 @@ def use_consecutive_off_hours(self) -> bool:
@property
def use_switch_on(self) -> bool:
- """Determines whether a Variable for SWITCH-ON is needed or not"""
- return (
- any(
- param not in (None, {})
- for param in [
- self.effects_per_switch_on,
- self.switch_on_total_max,
- self.on_hours_total_min,
- self.on_hours_total_max,
- ]
- )
- or self.force_switch_on
+ """Determines whether a variable for switch_on is needed or not"""
+ if self.force_switch_on:
+ return True
+
+ return any(
+ param is not None and param != {}
+ for param in [
+ self.effects_per_switch_on,
+ self.switch_on_total_max,
+ ]
)
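
The tightened `use_switch_on` above no longer reacts to `on_hours_total_min`/`on_hours_total_max`; only switch-on effects, a switch-on cap, or `force_switch_on` trigger the variable. A quick sketch (effect values illustrative):

```python
from flixopt.interface import OnOffParameters

assert OnOffParameters().use_switch_on is False
assert OnOffParameters(effects_per_switch_on={'costs': 100}).use_switch_on is True
assert OnOffParameters(switch_on_total_max=50).use_switch_on is True
assert OnOffParameters(force_switch_on=True).use_switch_on is True
```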
diff --git a/flixopt/io.py b/flixopt/io.py
index 314f693db..53d3d8e8a 100644
--- a/flixopt/io.py
+++ b/flixopt/io.py
@@ -11,50 +11,12 @@
import xarray as xr
import yaml
-from .core import TimeSeries
-
if TYPE_CHECKING:
import linopy
logger = logging.getLogger('flixopt')
-def replace_timeseries(obj, mode: Literal['name', 'stats', 'data'] = 'name'):
- """Recursively replaces TimeSeries objects with their names prefixed by '::::'."""
- if isinstance(obj, dict):
- return {k: replace_timeseries(v, mode) for k, v in obj.items()}
- elif isinstance(obj, list):
- return [replace_timeseries(v, mode) for v in obj]
- elif isinstance(obj, TimeSeries): # Adjust this based on the actual class
- if obj.all_equal:
- return obj.active_data.values[0].item()
- elif mode == 'name':
- return f'::::{obj.name}'
- elif mode == 'stats':
- return obj.stats
- elif mode == 'data':
- return obj
- else:
- raise ValueError(f'Invalid mode {mode}')
- else:
- return obj
-
-
-def insert_dataarray(obj, ds: xr.Dataset):
- """Recursively inserts TimeSeries objects into a dataset."""
- if isinstance(obj, dict):
- return {k: insert_dataarray(v, ds) for k, v in obj.items()}
- elif isinstance(obj, list):
- return [insert_dataarray(v, ds) for v in obj]
- elif isinstance(obj, str) and obj.startswith('::::'):
- da = ds[obj[4:]]
- if da.isel(time=-1).isnull():
- return da.isel(time=slice(0, -1))
- return da
- else:
- return obj
-
-
def remove_none_and_empty(obj):
"""Recursively removes None and empty dicts and lists values from a dictionary or list."""
@@ -83,15 +45,17 @@ def _save_to_yaml(data, output_file='formatted_output.yaml'):
output_file (str): Path to output YAML file
"""
# Process strings to normalize all newlines and handle special patterns
- processed_data = _process_complex_strings(data)
+ processed_data = _normalize_complex_data(data)
# Define a custom representer for strings
def represent_str(dumper, data):
- # Use literal block style (|) for any string with newlines
+ # Use literal block style (|) for multi-line strings
if '\n' in data:
+ # Clean up formatting for literal block style
+ data = data.strip() # Remove leading/trailing whitespace
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')
- # Use quoted style for strings with special characters to ensure proper parsing
+ # Use quoted style for strings with special characters
elif any(char in data for char in ':`{}[]#,&*!|>%@'):
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='"')
@@ -101,53 +65,80 @@ def represent_str(dumper, data):
# Add the string representer to SafeDumper
yaml.add_representer(str, represent_str, Dumper=yaml.SafeDumper)
+ # Configure dumper options for better formatting
+ class CustomDumper(yaml.SafeDumper):
+ def increase_indent(self, flow=False, indentless=False):
+ return super().increase_indent(flow, False)
+
# Write to file with settings that ensure proper formatting
with open(output_file, 'w', encoding='utf-8') as file:
yaml.dump(
processed_data,
file,
- Dumper=yaml.SafeDumper,
+ Dumper=CustomDumper,
sort_keys=False, # Preserve dictionary order
default_flow_style=False, # Use block style for mappings
- width=float('inf'), # Don't wrap long lines
+ width=1000, # Set a reasonable line width
allow_unicode=True, # Support Unicode characters
+ indent=2, # Set consistent indentation
)
-def _process_complex_strings(data):
+def _normalize_complex_data(data):
"""
- Process dictionary data recursively with comprehensive string normalization.
- Handles various types of strings and special formatting.
+ Recursively normalize strings in complex data structures.
+
+ Handles dictionaries, lists, and strings, applying various text normalization
+ rules while preserving important formatting elements.
Args:
- data: The data to process (dict, list, str, or other)
+ data: Any data type (dict, list, str, or primitive)
Returns:
- Processed data with normalized strings
+ Data with all strings normalized according to defined rules
"""
if isinstance(data, dict):
- return {k: _process_complex_strings(v) for k, v in data.items()}
+ return {key: _normalize_complex_data(value) for key, value in data.items()}
+
elif isinstance(data, list):
- return [_process_complex_strings(item) for item in data]
+ return [_normalize_complex_data(item) for item in data]
+
elif isinstance(data, str):
- # Step 1: Normalize line endings to \n
- normalized = data.replace('\r\n', '\n').replace('\r', '\n')
+ return _normalize_string_content(data)
+
+ else:
+ return data
+
+
+def _normalize_string_content(text):
+ """
+ Apply comprehensive string normalization rules.
+
+ Args:
+ text: The string to normalize
- # Step 2: Handle escaped newlines with robust regex
- normalized = re.sub(r'(?<!\\)\\n', '\n', normalized)
@@ -211,7 +202,7 @@ def save_dataset_to_netcdf(
engine: Literal['netcdf4', 'scipy', 'h5netcdf'] = 'h5netcdf',
) -> None:
"""
- Save a dataset to a netcdf file. Store the attrs as a json string in the 'attrs' attribute.
+ Save a dataset to a netcdf file. Store all attrs as JSON strings in 'attrs' attributes.
Args:
ds: Dataset to save.
@@ -234,8 +225,20 @@ def save_dataset_to_netcdf(
f'Dataset was exported without compression due to missing dependency "{engine}". '
f'Install {engine} via `pip install {engine}`.'
)
+
ds = ds.copy(deep=True)
ds.attrs = {'attrs': json.dumps(ds.attrs)}
+
+ # Convert all DataArray attrs to JSON strings
+ for var_name, data_var in ds.data_vars.items():
+ if data_var.attrs: # Only if there are attrs
+ ds[var_name].attrs = {'attrs': json.dumps(data_var.attrs)}
+
+ # Also handle coordinate attrs if they exist
+ for coord_name, coord_var in ds.coords.items():
+ if hasattr(coord_var, 'attrs') and coord_var.attrs:
+ ds[coord_name].attrs = {'attrs': json.dumps(coord_var.attrs)}
+
ds.to_netcdf(
path,
encoding=None
@@ -247,16 +250,30 @@ def save_dataset_to_netcdf(
def load_dataset_from_netcdf(path: str | pathlib.Path) -> xr.Dataset:
"""
- Load a dataset from a netcdf file. Load the attrs from the 'attrs' attribute.
+ Load a dataset from a netcdf file. Load all attrs from 'attrs' attributes.
Args:
path: Path to load the dataset from.
Returns:
- Dataset: Loaded dataset.
+ Dataset: Loaded dataset with restored attrs.
"""
ds = xr.load_dataset(str(path), engine='h5netcdf')
- ds.attrs = json.loads(ds.attrs['attrs'])
+
+ # Restore Dataset attrs
+ if 'attrs' in ds.attrs:
+ ds.attrs = json.loads(ds.attrs['attrs'])
+
+ # Restore DataArray attrs
+ for var_name, data_var in ds.data_vars.items():
+ if 'attrs' in data_var.attrs:
+ ds[var_name].attrs = json.loads(data_var.attrs['attrs'])
+
+ # Restore coordinate attrs
+ for coord_name, coord_var in ds.coords.items():
+ if hasattr(coord_var, 'attrs') and 'attrs' in coord_var.attrs:
+ ds[coord_name].attrs = json.loads(coord_var.attrs['attrs'])
+
return ds
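
Because dataset, variable, and coordinate attrs are now JSON-encoded on save and decoded on load, nested attrs survive the NetCDF round trip. A minimal sketch (file name and data are illustrative):

```python
import numpy as np
import xarray as xr

from flixopt.io import load_dataset_from_netcdf, save_dataset_to_netcdf

ds = xr.Dataset(
    {'flow_rate': xr.DataArray(np.arange(4.0), dims=['time'], attrs={'unit': 'kW', 'meta': {'source': 'demo'}})},
    attrs={'version': '3.0.0', 'nested': {'a': 1}},
)

save_dataset_to_netcdf(ds, 'demo.nc')
restored = load_dataset_from_netcdf('demo.nc')

assert restored.attrs == ds.attrs                            # dataset attrs restored
assert restored['flow_rate'].attrs == ds['flow_rate'].attrs  # per-variable attrs too
```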
diff --git a/flixopt/linear_converters.py b/flixopt/linear_converters.py
index aa2df9fc9..47c545506 100644
--- a/flixopt/linear_converters.py
+++ b/flixopt/linear_converters.py
@@ -10,7 +10,7 @@
import numpy as np
from .components import LinearConverter
-from .core import NumericDataTS, TimeSeriesData
+from .core import TemporalDataUser, TimeSeriesData
from .structure import register_class_for_io
if TYPE_CHECKING:
@@ -76,7 +76,7 @@ class Boiler(LinearConverter):
def __init__(
self,
label: str,
- eta: NumericDataTS,
+ eta: TemporalDataUser,
Q_fu: Flow,
Q_th: Flow,
on_off_parameters: OnOffParameters | None = None,
@@ -163,7 +163,7 @@ class Power2Heat(LinearConverter):
def __init__(
self,
label: str,
- eta: NumericDataTS,
+ eta: TemporalDataUser,
P_el: Flow,
Q_th: Flow,
on_off_parameters: OnOffParameters | None = None,
@@ -250,7 +250,7 @@ class HeatPump(LinearConverter):
def __init__(
self,
label: str,
- COP: NumericDataTS,
+ COP: TemporalDataUser,
P_el: Flow,
Q_th: Flow,
on_off_parameters: OnOffParameters | None = None,
@@ -339,7 +339,7 @@ class CoolingTower(LinearConverter):
def __init__(
self,
label: str,
- specific_electricity_demand: NumericDataTS,
+ specific_electricity_demand: TemporalDataUser,
P_el: Flow,
Q_th: Flow,
on_off_parameters: OnOffParameters | None = None,
@@ -349,7 +349,7 @@ def __init__(
label,
inputs=[P_el, Q_th],
outputs=[],
- conversion_factors=[{P_el.label: 1, Q_th.label: -specific_electricity_demand}],
+ conversion_factors=[{P_el.label: -1, Q_th.label: specific_electricity_demand}],
on_off_parameters=on_off_parameters,
meta_data=meta_data,
)
@@ -361,12 +361,12 @@ def __init__(
@property
def specific_electricity_demand(self):
- return -self.conversion_factors[0][self.Q_th.label]
+ return self.conversion_factors[0][self.Q_th.label]
@specific_electricity_demand.setter
def specific_electricity_demand(self, value):
check_bounds(value, 'specific_electricity_demand', self.label_full, 0, 1)
- self.conversion_factors[0][self.Q_th.label] = -value
+ self.conversion_factors[0][self.Q_th.label] = value
@register_class_for_io
@@ -437,8 +437,8 @@ class CHP(LinearConverter):
def __init__(
self,
label: str,
- eta_th: NumericDataTS,
- eta_el: NumericDataTS,
+ eta_th: TemporalDataUser,
+ eta_el: TemporalDataUser,
Q_fu: Flow,
P_el: Flow,
Q_th: Flow,
@@ -551,7 +551,7 @@ class HeatPumpWithSource(LinearConverter):
def __init__(
self,
label: str,
- COP: NumericDataTS,
+ COP: TemporalDataUser,
P_el: Flow,
Q_ab: Flow,
Q_th: Flow,
@@ -589,11 +589,11 @@ def COP(self, value): # noqa: N802
def check_bounds(
- value: NumericDataTS,
+ value: TemporalDataUser,
parameter_label: str,
element_label: str,
- lower_bound: NumericDataTS,
- upper_bound: NumericDataTS,
+ lower_bound: TemporalDataUser,
+ upper_bound: TemporalDataUser,
) -> None:
"""
Check if the value is within the bounds. The bounds are exclusive.
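
The flipped CoolingTower conversion factors encode the same electricity-to-heat relation as before; only the sign convention moved. A sketch of the implied balance (numbers illustrative, assuming the factors combine to a zero balance across the converter's flows):

```python
# conversion_factors=[{P_el.label: -1, Q_th.label: sed}] encodes
#   -1 * p_el + sed * q_th == 0   =>   p_el == sed * q_th
sed = 0.02          # specific electricity demand per unit of rejected heat
q_th = 500.0        # kW of heat into the cooling tower
p_el = sed * q_th   # 10 kW of electricity implied by the balance
assert p_el == 10.0
```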
diff --git a/flixopt/modeling.py b/flixopt/modeling.py
new file mode 100644
index 000000000..88e652bc9
--- /dev/null
+++ b/flixopt/modeling.py
@@ -0,0 +1,759 @@
+import logging
+
+import linopy
+import numpy as np
+import xarray as xr
+
+from .config import CONFIG
+from .core import TemporalData
+from .structure import Submodel
+
+logger = logging.getLogger('flixopt')
+
+
+class ModelingUtilitiesAbstract:
+ """Utility functions for modeling calculations - leveraging xarray for temporal data"""
+
+ @staticmethod
+ def to_binary(
+ values: xr.DataArray,
+ epsilon: float | None = None,
+ dims: str | list[str] | None = None,
+ ) -> xr.DataArray:
+ """
+ Converts a DataArray to binary {0, 1} values.
+
+ Args:
+ values: Input DataArray to convert to binary
+ epsilon: Tolerance for zero detection (uses CONFIG.Modeling.epsilon if None)
+ dims: Dims to keep. Other dimensions are collapsed using .any(): the result is 1 if any value along the collapsed dims is non-zero.
+
+ Returns:
+ Binary DataArray with the same shape (or collapsed along dimensions not listed in `dims`)
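+
+ Example (illustrative, with an explicit epsilon):
+ ``to_binary(xr.DataArray([0.0, 0.05, 2.0], dims=['time']), epsilon=0.1)``
+ yields ``[0, 0, 1]`` along ``time``.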
+ """
+ if not isinstance(values, xr.DataArray):
+ values = xr.DataArray(values, dims=['time'], coords={'time': range(len(values))})
+
+ if epsilon is None:
+ epsilon = CONFIG.Modeling.epsilon
+
+ if values.size == 1:
+ return xr.DataArray(0) if abs(values.item()) < epsilon else xr.DataArray(1)
+
+ # Convert to binary states
+ binary_states = np.abs(values) >= epsilon
+
+ # Optionally collapse dimensions using .any()
+ if dims is not None:
+ dims = [dims] if isinstance(dims, str) else dims
+
+ binary_states = binary_states.any(dim=[d for d in binary_states.dims if d not in dims])
+
+ return binary_states.astype(int)
+
+ @staticmethod
+ def count_consecutive_states(
+ binary_values: xr.DataArray | np.ndarray | list[int | float],
+ dim: str = 'time',
+ epsilon: float | None = None,
+ ) -> float:
+ """Count consecutive steps in the final active state of a binary time series.
+
+ This function counts how many consecutive time steps the series remains "on"
+ (non-zero) at the end of the time series. If the final state is "off", returns 0.
+
+ Args:
+ binary_values: Binary DataArray with values close to 0 (off) or 1 (on).
+ dim: Dimension along which to count consecutive states.
+ epsilon: Tolerance for zero detection. Uses CONFIG.Modeling.epsilon if None.
+
+ Returns:
+ Sum of values in the final consecutive "on" period. Returns 0.0 if the
+ final state is "off".
+
+ Examples:
+ >>> arr = xr.DataArray([0, 0, 1, 1, 1, 0, 1, 1], dims=['time'])
+ >>> ModelingUtilitiesAbstract.count_consecutive_states(arr)
+ 2.0
+
+ >>> arr = [0, 0, 1, 0, 1, 1, 1, 1]
+ >>> ModelingUtilitiesAbstract.count_consecutive_states(arr)
+ 4.0
+ """
+ epsilon = epsilon or CONFIG.Modeling.epsilon
+
+ if isinstance(binary_values, xr.DataArray):
+ # xarray path
+ other_dims = [d for d in binary_values.dims if d != dim]
+ if other_dims:
+ binary_values = binary_values.any(dim=other_dims)
+ arr = binary_values.values
+ else:
+ # numpy/array-like path
+ arr = np.asarray(binary_values)
+
+ # Flatten to 1D if needed
+ arr = arr.ravel() if arr.ndim > 1 else arr
+
+ # Handle edge cases
+ if arr.size == 0:
+ return 0.0
+ if arr.size == 1:
+ return float(arr[0]) if not np.isclose(arr[0], 0, atol=epsilon) else 0.0
+
+ # Return 0 if final state is off
+ if np.isclose(arr[-1], 0, atol=epsilon):
+ return 0.0
+
+ # Find the last zero position (treat NaNs as off)
+ arr = np.nan_to_num(arr, nan=0.0)
+ is_zero = np.isclose(arr, 0, atol=epsilon)
+ zero_indices = np.where(is_zero)[0]
+
+ # Calculate sum from last zero to end
+ start_idx = zero_indices[-1] + 1 if zero_indices.size > 0 else 0
+
+ return float(np.sum(arr[start_idx:]))
+
+
+class ModelingUtilities:
+ @staticmethod
+ def compute_consecutive_hours_in_state(
+ binary_values: TemporalData,
+ hours_per_timestep: int | float,
+ epsilon: float | None = None,
+ ) -> float:
+ """
+ Computes the final consecutive duration in state 'on' (=1) in hours.
+
+ Args:
+ binary_values: Binary DataArray with 'time' dim, or scalar/array
+ hours_per_timestep: Duration of each timestep in hours
+ epsilon: Tolerance for zero detection (uses CONFIG.Modeling.epsilon if None)
+
+ Returns:
+ The duration of the final consecutive 'on' period in hours
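+
+ Example (illustrative):
+ ``compute_consecutive_hours_in_state([0, 1, 1, 1], hours_per_timestep=0.5)``
+ returns ``1.5`` (three trailing 'on' steps of half an hour each).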
+ """
+ if not isinstance(hours_per_timestep, (int, float)):
+ raise TypeError(f'hours_per_timestep must be a scalar, got {type(hours_per_timestep)}')
+
+ return (
+ ModelingUtilitiesAbstract.count_consecutive_states(binary_values=binary_values, epsilon=epsilon)
+ * hours_per_timestep
+ )
+
+ @staticmethod
+ def compute_previous_states(previous_values: xr.DataArray | None, epsilon: float | None = None) -> xr.DataArray:
+ return ModelingUtilitiesAbstract.to_binary(values=previous_values, epsilon=epsilon, dims='time')
+
+ @staticmethod
+ def compute_previous_on_duration(
+ previous_values: xr.DataArray, hours_per_step: xr.DataArray | float | int
+ ) -> float:
+ return (
+ ModelingUtilitiesAbstract.count_consecutive_states(ModelingUtilitiesAbstract.to_binary(previous_values))
+ * hours_per_step
+ )
+
+ @staticmethod
+ def compute_previous_off_duration(
+ previous_values: xr.DataArray, hours_per_step: xr.DataArray | float | int
+ ) -> float:
+ """
+ Compute previous consecutive 'off' duration.
+
+ Args:
+ previous_values: DataArray with 'time' dimension
+ hours_per_step: Duration of each timestep in hours
+
+ Returns:
+ Previous consecutive off duration in hours
+ """
+ if previous_values is None or previous_values.size == 0:
+ return 0.0
+
+ previous_states = ModelingUtilities.compute_previous_states(previous_values)
+ previous_off_states = 1 - previous_states
+ return ModelingUtilities.compute_consecutive_hours_in_state(previous_off_states, hours_per_step)
+
+ @staticmethod
+ def get_most_recent_state(previous_values: xr.DataArray | None) -> int:
+ """
+ Get the most recent binary state from previous values.
+
+ Args:
+ previous_values: DataArray with 'time' dimension
+
+ Returns:
+ Most recent binary state (0 or 1)
+ """
+ if previous_values is None or previous_values.size == 0:
+ return 0
+
+ previous_states = ModelingUtilities.compute_previous_states(previous_values)
+ return int(previous_states.isel(time=-1).item())
+
+
+class ModelingPrimitives:
+ """Mathematical modeling primitives returning (variables, constraints) tuples"""
+
+ @staticmethod
+ def expression_tracking_variable(
+ model: Submodel,
+ tracked_expression,
+ name: str = None,
+ short_name: str = None,
+ bounds: tuple[TemporalData, TemporalData] = None,
+ coords: str | list[str] | None = None,
+ ) -> tuple[linopy.Variable, linopy.Constraint]:
+ """
+ Creates variable that equals a given expression.
+
+ Mathematical formulation:
+ tracker = expression
+ lower ≤ tracker ≤ upper (if bounds provided)
+
+ Returns:
+ tuple: (tracker_variable, tracking_constraint)
+ """
+ if not isinstance(model, Submodel):
+ raise ValueError('ModelingPrimitives.expression_tracking_variable() can only be used with a Submodel')
+
+ if not bounds:
+ tracker = model.add_variables(name=name, coords=model.get_coords(coords), short_name=short_name)
+ else:
+ tracker = model.add_variables(
+ lower=bounds[0] if bounds[0] is not None else -np.inf,
+ upper=bounds[1] if bounds[1] is not None else np.inf,
+ name=name,
+ coords=model.get_coords(coords),
+ short_name=short_name,
+ )
+
+ # Constraint: tracker = expression
+ tracking = model.add_constraints(tracker == tracked_expression, name=name, short_name=short_name)
+
+ return tracker, tracking
+
+ @staticmethod
+ def consecutive_duration_tracking(
+ model: Submodel,
+ state_variable: linopy.Variable,
+ name: str = None,
+ short_name: str = None,
+ minimum_duration: TemporalData | None = None,
+ maximum_duration: TemporalData | None = None,
+ duration_dim: str = 'time',
+ duration_per_step: int | float | TemporalData = None,
+ previous_duration: TemporalData = 0,
+ ) -> tuple[dict[str, linopy.Variable], dict[str, linopy.Constraint]]:
+ """
+ Creates consecutive duration tracking for a binary state variable.
+
+ Mathematical formulation:
+ duration[t] ≤ state[t] * M ∀t
+ duration[t+1] ≤ duration[t] + duration_per_step[t] ∀t
+ duration[t+1] ≥ duration[t] + duration_per_step[t] + (state[t+1] - 1) * M ∀t
+ duration[0] = (duration_per_step[0] + previous_duration) * state[0]
+
+ If minimum_duration provided:
+ duration[t] ≥ (state[t-1] - state[t]) * minimum_duration[t-1] ∀t > 0
+
+ Args:
+ name: Name of the duration variable
+ state_variable: Binary state variable to track duration for
+ minimum_duration: Optional minimum consecutive duration
+ maximum_duration: Optional maximum consecutive duration
+ previous_duration: Duration from before first timestep
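+ duration_dim: Dimension along which duration accumulates
+ duration_per_step: Step length along ``duration_dim`` (e.g., hours per timestep)
+
+ Example (illustrative):
+ With ``duration_per_step = 1``, ``previous_duration = 0`` and a state
+ trajectory ``[1, 1, 0, 1]``, the constraints force ``duration = [1, 2, 0, 1]``.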
+
+ Returns:
+ variables: {'duration': duration_var}
+ constraints: {'ub': constraint, 'forward': constraint, 'backward': constraint, ...}
+ """
+ if not isinstance(model, Submodel):
+ raise ValueError('ModelingPrimitives.consecutive_duration_tracking() can only be used with a Submodel')
+
+ mega = duration_per_step.sum(duration_dim) + previous_duration # Big-M: longest representable duration (all steps plus history)
+
+ # Duration variable
+ duration = model.add_variables(
+ lower=0,
+ upper=maximum_duration if maximum_duration is not None else mega,
+ coords=state_variable.coords,
+ name=name,
+ short_name=short_name,
+ )
+
+ constraints = {}
+
+ # Upper bound: duration[t] ≤ state[t] * M
+ constraints['ub'] = model.add_constraints(duration <= state_variable * mega, name=f'{duration.name}|ub')
+
+ # Forward constraint: duration[t+1] ≤ duration[t] + duration_per_step[t]
+ constraints['forward'] = model.add_constraints(
+ duration.isel({duration_dim: slice(1, None)})
+ <= duration.isel({duration_dim: slice(None, -1)}) + duration_per_step.isel({duration_dim: slice(None, -1)}),
+ name=f'{duration.name}|forward',
+ )
+
+ # Backward constraint: duration[t+1] ≥ duration[t] + duration_per_step[t] + (state[t+1] - 1) * M
+ constraints['backward'] = model.add_constraints(
+ duration.isel({duration_dim: slice(1, None)})
+ >= duration.isel({duration_dim: slice(None, -1)})
+ + duration_per_step.isel({duration_dim: slice(None, -1)})
+ + (state_variable.isel({duration_dim: slice(1, None)}) - 1) * mega,
+ name=f'{duration.name}|backward',
+ )
+
+ # Initial condition: duration[0] = (duration_per_step[0] + previous_duration) * state[0]
+ constraints['initial'] = model.add_constraints(
+ duration.isel({duration_dim: 0})
+ == (duration_per_step.isel({duration_dim: 0}) + previous_duration) * state_variable.isel({duration_dim: 0}),
+ name=f'{duration.name}|initial',
+ )
+
+ # Minimum duration constraint if provided
+ if minimum_duration is not None:
+ constraints['lb'] = model.add_constraints(
+ duration
+ >= (
+ state_variable.isel({duration_dim: slice(None, -1)})
+ - state_variable.isel({duration_dim: slice(1, None)})
+ )
+ * minimum_duration.isel({duration_dim: slice(None, -1)}),
+ name=f'{duration.name}|lb',
+ )
+
+ # Handle initial condition for minimum duration
+ prev = (
+ float(previous_duration)
+ if not isinstance(previous_duration, xr.DataArray)
+ else float(previous_duration.max().item())
+ )
+ min0 = float(minimum_duration.isel({duration_dim: 0}).max().item())
+ if prev > 0 and prev < min0:
+ constraints['initial_lb'] = model.add_constraints(
+ state_variable.isel({duration_dim: 0}) == 1, name=f'{duration.name}|initial_lb'
+ )
+
+ variables = {'duration': duration}
+
+ return variables, constraints
+
+ @staticmethod
+ def mutual_exclusivity_constraint(
+ model: Submodel,
+ binary_variables: list[linopy.Variable],
+ tolerance: float = 1,
+ short_name: str = 'mutual_exclusivity',
+ ) -> linopy.Constraint:
+ """
+ Creates mutual exclusivity constraint for binary variables.
+
+ Mathematical formulation:
+ Σ(binary_vars[i]) ≤ tolerance ∀t
+
+ Ensures at most one binary variable can be 1 at any time.
+ Tolerance > 1.0 accounts for binary variable numerical precision.
+
+ Args:
+ binary_variables: List of binary variables that should be mutually exclusive
+ tolerance: Upper bound
+ short_name: Short name of the constraint
+
+ Returns:
+ linopy.Constraint: The mutual exclusivity constraint
+
+ Raises:
+ AssertionError: If fewer than 2 variables provided or variables aren't binary
+ """
+ if not isinstance(model, Submodel):
+ raise ValueError('ModelingPrimitives.mutual_exclusivity_constraint() can only be used with a Submodel')
+
+ assert len(binary_variables) >= 2, (
+ f'Mutual exclusivity requires at least 2 variables, got {len(binary_variables)}'
+ )
+
+ for var in binary_variables:
+ assert var.attrs.get('binary', False), (
+ f'Variable {var.name} must be binary for mutual exclusivity constraint'
+ )
+
+ # Create mutual exclusivity constraint
+ mutual_exclusivity = model.add_constraints(sum(binary_variables) <= tolerance, short_name=short_name)
+
+ return mutual_exclusivity
+
+
+class BoundingPatterns:
+ """High-level patterns that compose primitives and return (variables, constraints) tuples"""
+
+ @staticmethod
+ def basic_bounds(
+ model: Submodel,
+ variable: linopy.Variable,
+ bounds: tuple[TemporalData, TemporalData],
+ name: str = None,
+ ) -> list[linopy.Constraint]:
+ """Create simple bounds.
+ variable ∈ [lower_bound, upper_bound]
+
+ Mathematical Formulation:
+ lower_bound ≤ variable ≤ upper_bound
+
+ Args:
+ model: The optimization model instance
+ variable: Variable to be bounded
+ bounds: Tuple of (lower_bound, upper_bound) absolute bounds
+
+ Returns:
+ list[linopy.Constraint]: [lower_constraint, upper_constraint]
+ """
+ if not isinstance(model, Submodel):
+ raise ValueError('BoundingPatterns.basic_bounds() can only be used with a Submodel')
+
+ lower_bound, upper_bound = bounds
+ name = name or f'{variable.name}'
+
+ upper_constraint = model.add_constraints(variable <= upper_bound, name=f'{name}|ub')
+ lower_constraint = model.add_constraints(variable >= lower_bound, name=f'{name}|lb')
+
+ return [lower_constraint, upper_constraint]
+
+ @staticmethod
+ def bounds_with_state(
+ model: Submodel,
+ variable: linopy.Variable,
+ bounds: tuple[TemporalData, TemporalData],
+ variable_state: linopy.Variable,
+        name: str | None = None,
+ ) -> list[linopy.Constraint]:
+ """Constraint a variable to bounds, that can be escaped from to 0 by a binary variable.
+ variable ∈ {0, [max(ε, lower_bound), upper_bound]}
+
+ Mathematical Formulation:
+ - variable_state * max(ε, lower_bound) ≤ variable ≤ variable_state * upper_bound
+
+ Use Cases:
+ - Investment decisions
+ - Unit commitment (on/off states)
+
+        Args:
+            model: The optimization model instance
+            variable: Variable to be bounded
+            bounds: Tuple of (lower_bound, upper_bound) absolute bounds
+            variable_state: Binary variable controlling the bounds
+            name: Optional name prefix for the constraints
+
+        Returns:
+            list[linopy.Constraint]: [lower_constraint, upper_constraint], or a single
+            fixing constraint if lower and upper bound coincide
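+
+        Example:
+            A minimal sketch (assumes `model` is a Submodel, `flow_rate` an existing variable
+            and `on` an existing binary variable):
+
+            >>> lower, upper = BoundingPatterns.bounds_with_state(model, flow_rate, bounds=(10, 100), variable_state=on)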
+ """
+ if not isinstance(model, Submodel):
+ raise ValueError('BoundingPatterns.bounds_with_state() can only be used with a Submodel')
+
+ lower_bound, upper_bound = bounds
+ name = name or f'{variable.name}'
+
+ if np.allclose(lower_bound, upper_bound, atol=1e-10, equal_nan=True):
+ fix_constraint = model.add_constraints(variable == variable_state * upper_bound, name=f'{name}|fix')
+ return [fix_constraint]
+
+ epsilon = np.maximum(CONFIG.Modeling.epsilon, lower_bound)
+
+ upper_constraint = model.add_constraints(variable <= variable_state * upper_bound, name=f'{name}|ub')
+ lower_constraint = model.add_constraints(variable >= variable_state * epsilon, name=f'{name}|lb')
+
+ return [lower_constraint, upper_constraint]
+
+ @staticmethod
+ def scaled_bounds(
+ model: Submodel,
+ variable: linopy.Variable,
+ scaling_variable: linopy.Variable,
+ relative_bounds: tuple[TemporalData, TemporalData],
+        name: str | None = None,
+ ) -> list[linopy.Constraint]:
+ """Constraint a variable by scaling bounds, dependent on another variable.
+ variable ∈ [lower_bound * scaling_variable, upper_bound * scaling_variable]
+
+ Mathematical Formulation:
+ scaling_variable * lower_factor ≤ variable ≤ scaling_variable * upper_factor
+
+ Use Cases:
+ - Flow rates bounded by equipment capacity
+ - Production levels scaled by plant size
+
+        Args:
+            model: The optimization model instance
+            variable: Variable to be bounded
+            scaling_variable: Variable that scales the bound factors
+            relative_bounds: Tuple of (lower_factor, upper_factor) relative to the scaling variable
+            name: Optional name prefix for the constraints
+
+        Returns:
+            list[linopy.Constraint]: [lower_constraint, upper_constraint], or a single
+            fixing constraint if both factors coincide
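+
+        Example:
+            A minimal sketch (assumes `model` is a Submodel and `flow_rate`, `size` are existing variables):
+
+            >>> lower, upper = BoundingPatterns.scaled_bounds(model, flow_rate, size, relative_bounds=(0.2, 1.0))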
+ """
+ if not isinstance(model, Submodel):
+ raise ValueError('BoundingPatterns.scaled_bounds() can only be used with a Submodel')
+
+ rel_lower, rel_upper = relative_bounds
+ name = name or f'{variable.name}'
+
+ if np.allclose(rel_lower, rel_upper, atol=1e-10, equal_nan=True):
+ return [model.add_constraints(variable == scaling_variable * rel_lower, name=f'{name}|fixed')]
+
+ upper_constraint = model.add_constraints(variable <= scaling_variable * rel_upper, name=f'{name}|ub')
+ lower_constraint = model.add_constraints(variable >= scaling_variable * rel_lower, name=f'{name}|lb')
+
+ return [lower_constraint, upper_constraint]
+
+ @staticmethod
+ def scaled_bounds_with_state(
+ model: Submodel,
+ variable: linopy.Variable,
+ scaling_variable: linopy.Variable,
+ relative_bounds: tuple[TemporalData, TemporalData],
+ scaling_bounds: tuple[TemporalData, TemporalData],
+ variable_state: linopy.Variable,
+        name: str | None = None,
+ ) -> list[linopy.Constraint]:
+ """Constraint a variable by scaling bounds with binary state control.
+
+ variable ∈ {0, [max(ε, lower_relative_bound) * scaling_variable, upper_relative_bound * scaling_variable]}
+
+ Mathematical Formulation (Big-M):
+ (variable_state - 1) * M_misc + scaling_variable * rel_lower ≤ variable ≤ scaling_variable * rel_upper
+ variable_state * big_m_lower ≤ variable ≤ variable_state * big_m_upper
+
+ Where:
+ M_misc = scaling_max * rel_lower
+ big_m_upper = scaling_max * rel_upper
+ big_m_lower = max(ε, scaling_min * rel_lower)
+
+ Args:
+ model: The optimization model instance
+ variable: Variable to be bounded
+ scaling_variable: Variable that scales the bound factors
+ relative_bounds: Tuple of (lower_factor, upper_factor) relative to scaling variable
+ scaling_bounds: Tuple of (scaling_min, scaling_max) bounds of the scaling variable
+ variable_state: Binary variable for on/off control
+ name: Optional name prefix for constraints
+
+ Returns:
+ List[linopy.Constraint]: List of constraint objects
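+
+        Example:
+            A minimal sketch (assumes `model` is a Submodel, `flow_rate` and `size` are existing
+            variables, `on` is an existing binary variable and the size may range from 20 to 200):
+
+            >>> constraints = BoundingPatterns.scaled_bounds_with_state(
+            ...     model, flow_rate, size, relative_bounds=(0.3, 1.0), scaling_bounds=(20, 200), variable_state=on
+            ... )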
+ """
+ if not isinstance(model, Submodel):
+ raise ValueError('BoundingPatterns.scaled_bounds_with_state() can only be used with a Submodel')
+
+ rel_lower, rel_upper = relative_bounds
+ scaling_min, scaling_max = scaling_bounds
+ name = name or f'{variable.name}'
+
+ big_m_misc = scaling_max * rel_lower
+
+ scaling_lower = model.add_constraints(
+ variable >= (variable_state - 1) * big_m_misc + scaling_variable * rel_lower, name=f'{name}|lb2'
+ )
+ scaling_upper = model.add_constraints(variable <= scaling_variable * rel_upper, name=f'{name}|ub2')
+
+ big_m_upper = rel_upper * scaling_max
+ big_m_lower = np.maximum(CONFIG.Modeling.epsilon, rel_lower * scaling_min)
+
+ binary_upper = model.add_constraints(variable_state * big_m_upper >= variable, name=f'{name}|ub1')
+ binary_lower = model.add_constraints(variable_state * big_m_lower <= variable, name=f'{name}|lb1')
+
+ return [scaling_lower, scaling_upper, binary_lower, binary_upper]
+
+ @staticmethod
+ def state_transition_bounds(
+ model: Submodel,
+ state_variable: linopy.Variable,
+ switch_on: linopy.Variable,
+ switch_off: linopy.Variable,
+ name: str,
+ previous_state=0,
+ coord: str = 'time',
+ ) -> tuple[linopy.Constraint, linopy.Constraint, linopy.Constraint]:
+ """
+        Links existing switch-on/off binary variables to a binary state variable via state transition logic.
+
+ Mathematical formulation:
+ switch_on[t] - switch_off[t] = state[t] - state[t-1] ∀t > 0
+ switch_on[0] - switch_off[0] = state[0] - previous_state
+ switch_on[t] + switch_off[t] ≤ 1 ∀t
+ switch_on[t], switch_off[t] ∈ {0, 1}
+
+        Returns:
+            Tuple of constraints: (transition, initial, mutex)
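+
+        Example:
+            A minimal sketch (assumes `model` is a Submodel and `on`, `switch_on`, `switch_off`
+            are existing binary variables):
+
+            >>> transition, initial, mutex = BoundingPatterns.state_transition_bounds(
+            ...     model, on, switch_on, switch_off, name='unit', previous_state=0
+            ... )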
+ """
+ if not isinstance(model, Submodel):
+            raise ValueError('BoundingPatterns.state_transition_bounds() can only be used with a Submodel')
+
+ # State transition constraints for t > 0
+ transition = model.add_constraints(
+ switch_on.isel({coord: slice(1, None)}) - switch_off.isel({coord: slice(1, None)})
+ == state_variable.isel({coord: slice(1, None)}) - state_variable.isel({coord: slice(None, -1)}),
+ name=f'{name}|transition',
+ )
+
+ # Initial state transition for t = 0
+ initial = model.add_constraints(
+ switch_on.isel({coord: 0}) - switch_off.isel({coord: 0})
+ == state_variable.isel({coord: 0}) - previous_state,
+ name=f'{name}|initial',
+ )
+
+ # At most one switch per timestep
+ mutex = model.add_constraints(switch_on + switch_off <= 1, name=f'{name}|mutex')
+
+ return transition, initial, mutex
+
+ @staticmethod
+ def continuous_transition_bounds(
+ model: Submodel,
+ continuous_variable: linopy.Variable,
+ switch_on: linopy.Variable,
+ switch_off: linopy.Variable,
+ name: str,
+ max_change: float | xr.DataArray,
+ previous_value: float | xr.DataArray = 0.0,
+ coord: str = 'time',
+ ) -> tuple[linopy.Constraint, linopy.Constraint, linopy.Constraint, linopy.Constraint]:
+ """
+ Constrains a continuous variable to only change when switch variables are active.
+
+ Mathematical formulation:
+            |continuous[t] - continuous[t-1]| ≤ max_change * (switch_on[t] + switch_off[t]) ∀t > 0
+            |continuous[0] - previous_value| ≤ max_change * (switch_on[0] + switch_off[0])
+ switch_on[t], switch_off[t] ∈ {0, 1}
+
+ This ensures the continuous variable can only change when switch_on or switch_off is 1.
+ When both switches are 0, the variable must stay exactly constant.
+
+ Args:
+ model: The submodel to add constraints to
+ continuous_variable: The continuous variable to constrain
+ switch_on: Binary variable indicating when changes are allowed (typically transitions to active state)
+ switch_off: Binary variable indicating when changes are allowed (typically transitions to inactive state)
+ name: Base name for the constraints
+ max_change: Maximum possible change in the continuous variable (Big-M value)
+ previous_value: Initial value of the continuous variable before first period
+ coord: Coordinate name for time dimension
+
+ Returns:
+ Tuple of constraints: (transition_upper, transition_lower, initial_upper, initial_lower)
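+
+        Example:
+            A minimal sketch (assumes `model` is a Submodel, `power` a continuous variable,
+            `switch_on` and `switch_off` binary variables; 100.0 is an illustrative Big-M value):
+
+            >>> constraints = BoundingPatterns.continuous_transition_bounds(
+            ...     model, power, switch_on, switch_off, name='unit', max_change=100.0
+            ... )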
+ """
+ if not isinstance(model, Submodel):
+            raise ValueError('BoundingPatterns.continuous_transition_bounds() can only be used with a Submodel')
+
+ # Transition constraints for t > 0: continuous variable can only change when switches are active
+ transition_upper = model.add_constraints(
+ continuous_variable.isel({coord: slice(1, None)}) - continuous_variable.isel({coord: slice(None, -1)})
+ <= max_change * (switch_on.isel({coord: slice(1, None)}) + switch_off.isel({coord: slice(1, None)})),
+ name=f'{name}|transition_ub',
+ )
+
+ transition_lower = model.add_constraints(
+ -(continuous_variable.isel({coord: slice(1, None)}) - continuous_variable.isel({coord: slice(None, -1)}))
+ <= max_change * (switch_on.isel({coord: slice(1, None)}) + switch_off.isel({coord: slice(1, None)})),
+ name=f'{name}|transition_lb',
+ )
+
+ # Initial constraints for t = 0
+ initial_upper = model.add_constraints(
+ continuous_variable.isel({coord: 0}) - previous_value
+ <= max_change * (switch_on.isel({coord: 0}) + switch_off.isel({coord: 0})),
+ name=f'{name}|initial_ub',
+ )
+
+ initial_lower = model.add_constraints(
+ -continuous_variable.isel({coord: 0}) + previous_value
+ <= max_change * (switch_on.isel({coord: 0}) + switch_off.isel({coord: 0})),
+ name=f'{name}|initial_lb',
+ )
+
+ return transition_upper, transition_lower, initial_upper, initial_lower
+
+ @staticmethod
+ def link_changes_to_level_with_binaries(
+ model: Submodel,
+ level_variable: linopy.Variable,
+ increase_variable: linopy.Variable,
+ decrease_variable: linopy.Variable,
+ increase_binary: linopy.Variable,
+ decrease_binary: linopy.Variable,
+ name: str,
+ max_change: float | xr.DataArray,
+ initial_level: float | xr.DataArray = 0.0,
+ coord: str = 'period',
+ ) -> tuple[linopy.Constraint, linopy.Constraint, linopy.Constraint, linopy.Constraint, linopy.Constraint]:
+ """
+ Link changes to level evolution with binary control and mutual exclusivity.
+
+ Creates the complete constraint system for ALL time periods:
+ 1. level[0] = initial_level + increase[0] - decrease[0]
+ 2. level[t] = level[t-1] + increase[t] - decrease[t] ∀t > 0
+        3. increase[t] ≤ max_change * increase_binary[t] ∀t
+        4. decrease[t] ≤ max_change * decrease_binary[t] ∀t
+        5. increase_binary[t] + decrease_binary[t] ≤ 1 ∀t
+
+        Args:
+            model: The submodel to add constraints to
+            level_variable: Level variable for ALL periods
+            increase_variable: Incremental additions for ALL periods (>= 0)
+            decrease_variable: Incremental reductions for ALL periods (>= 0)
+            increase_binary: Binary indicators for increases for ALL periods
+            decrease_binary: Binary indicators for decreases for ALL periods
+            name: Base name for constraints
+            max_change: Maximum change per period
+            initial_level: Starting level before first period
+            coord: Time coordinate name
+
+ Returns:
+ Tuple of (initial_constraint, transition_constraints, increase_bounds, decrease_bounds, mutual_exclusion)
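+
+        Example:
+            A minimal sketch (assumes `model` is a Submodel; all variable names are illustrative):
+
+            >>> constraints = BoundingPatterns.link_changes_to_level_with_binaries(
+            ...     model, capacity, expansion, retirement, expansion_on, retirement_on,
+            ...     name='capacity', max_change=50.0, initial_level=100.0, coord='period'
+            ... )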
+ """
+ if not isinstance(model, Submodel):
+ raise ValueError('BoundingPatterns.link_changes_to_level_with_binaries() can only be used with a Submodel')
+
+ # 1. Initial period: level[0] - initial_level = increase[0] - decrease[0]
+ initial_constraint = model.add_constraints(
+ level_variable.isel({coord: 0}) - initial_level
+ == increase_variable.isel({coord: 0}) - decrease_variable.isel({coord: 0}),
+ name=f'{name}|initial_level',
+ )
+
+ # 2. Transition periods: level[t] = level[t-1] + increase[t] - decrease[t] for t > 0
+ transition_constraints = model.add_constraints(
+ level_variable.isel({coord: slice(1, None)})
+ == level_variable.isel({coord: slice(None, -1)})
+ + increase_variable.isel({coord: slice(1, None)})
+ - decrease_variable.isel({coord: slice(1, None)}),
+ name=f'{name}|transitions',
+ )
+
+ # 3. Increase bounds: increase[t] <= max_change * increase_binary[t] for all t
+ increase_bounds = model.add_constraints(
+ increase_variable <= increase_binary * max_change,
+ name=f'{name}|increase_bounds',
+ )
+
+ # 4. Decrease bounds: decrease[t] <= max_change * decrease_binary[t] for all t
+ decrease_bounds = model.add_constraints(
+ decrease_variable <= decrease_binary * max_change,
+ name=f'{name}|decrease_bounds',
+ )
+
+ # 5. Mutual exclusivity: increase_binary[t] + decrease_binary[t] <= 1 for all t
+ mutual_exclusion = model.add_constraints(
+ increase_binary + decrease_binary <= 1,
+ name=f'{name}|mutual_exclusion',
+ )
+
+ return initial_constraint, transition_constraints, increase_bounds, decrease_bounds, mutual_exclusion
diff --git a/flixopt/plotting.py b/flixopt/plotting.py
index 950172c6a..356f013c0 100644
--- a/flixopt/plotting.py
+++ b/flixopt/plotting.py
@@ -27,9 +27,11 @@
import itertools
import logging
+import os
import pathlib
from typing import TYPE_CHECKING, Any, Literal
+import matplotlib
import matplotlib.colors as mcolors
import matplotlib.pyplot as plt
import numpy as np
@@ -200,7 +202,7 @@ def _generate_colors_from_colormap(self, colormap_name: str, num_colors: int) ->
try:
colorscale = px.colors.get_colorscale(colormap_name)
except PlotlyError as e:
- logger.warning(f"Colorscale '{colormap_name}' not found in Plotly. Using {self.default_colormap}: {e}")
+ logger.error(f"Colorscale '{colormap_name}' not found in Plotly. Using {self.default_colormap}: {e}")
colorscale = px.colors.get_colorscale(self.default_colormap)
# Generate evenly spaced points
@@ -211,9 +213,7 @@ def _generate_colors_from_colormap(self, colormap_name: str, num_colors: int) ->
try:
cmap = plt.get_cmap(colormap_name, num_colors)
except ValueError as e:
- logger.warning(
- f"Colormap '{colormap_name}' not found in Matplotlib. Using {self.default_colormap}: {e}"
- )
+ logger.error(f"Colormap '{colormap_name}' not found in Matplotlib. Using {self.default_colormap}: {e}")
cmap = plt.get_cmap(self.default_colormap, num_colors)
return [cmap(i) for i in range(num_colors)]
@@ -230,7 +230,7 @@ def _handle_color_list(self, colors: list[str], num_labels: int) -> list[str]:
list of colors matching the number of labels
"""
if len(colors) == 0:
- logger.warning(f'Empty color list provided. Using {self.default_colormap} instead.')
+ logger.error(f'Empty color list provided. Using {self.default_colormap} instead.')
return self._generate_colors_from_colormap(self.default_colormap, num_labels)
if len(colors) < num_labels:
@@ -302,7 +302,7 @@ def process_colors(
Either a list of colors or a dictionary mapping labels to colors
"""
if len(labels) == 0:
- logger.warning('No labels provided for color assignment.')
+ logger.error('No labels provided for color assignment.')
return {} if return_mapping else []
# Process based on type of colors input
@@ -313,7 +313,7 @@ def process_colors(
elif isinstance(colors, dict):
color_list = self._handle_color_dict(colors, labels)
else:
- logger.warning(
+ logger.error(
f'Unsupported color specification type: {type(colors)}. Using {self.default_colormap} instead.'
)
color_list = self._generate_colors_from_colormap(self.default_colormap, len(labels))
@@ -327,7 +327,7 @@ def process_colors(
def with_plotly(
data: pd.DataFrame,
- mode: Literal['bar', 'line', 'area'] = 'area',
+ style: Literal['stacked_bar', 'line', 'area', 'grouped_bar'] = 'stacked_bar',
colors: ColorType = 'viridis',
title: str = '',
ylabel: str = '',
@@ -340,7 +340,7 @@ def with_plotly(
Args:
data: A DataFrame containing the data to plot, where the index represents time (e.g., hours),
and each column represents a separate data series.
- mode: The plotting mode. Use 'bar' for stacked bar charts, 'line' for stepped lines,
+ style: The plotting style. Use 'stacked_bar' for stacked bar charts, 'line' for stepped lines,
or 'area' for stacked area charts.
colors: Color specification, can be:
- A string with a colorscale name (e.g., 'viridis', 'plasma')
@@ -354,8 +354,8 @@ def with_plotly(
Returns:
A Plotly figure object containing the generated plot.
"""
- if mode not in ('bar', 'line', 'area'):
- raise ValueError(f"'mode' must be one of {{'bar','line','area'}}, got {mode!r}")
+ if style not in ('stacked_bar', 'line', 'area', 'grouped_bar'):
+ raise ValueError(f"'style' must be one of {{'stacked_bar','line','area', 'grouped_bar'}}, got {style!r}")
if data.empty:
return go.Figure()
@@ -363,23 +363,34 @@ def with_plotly(
fig = fig if fig is not None else go.Figure()
- if mode == 'bar':
+ if style == 'stacked_bar':
for i, column in enumerate(data.columns):
fig.add_trace(
go.Bar(
x=data.index,
y=data[column],
name=column,
- marker=dict(color=processed_colors[i]),
+ marker=dict(
+ color=processed_colors[i], line=dict(width=0, color='rgba(0,0,0,0)')
+ ), # Transparent line with 0 width
)
)
fig.update_layout(
- barmode='relative' if mode == 'bar' else None,
+ barmode='relative',
bargap=0, # No space between bars
- bargroupgap=0, # No space between groups of bars
+ bargroupgap=0, # No space between grouped bars
)
- elif mode == 'line':
+ if style == 'grouped_bar':
+ for i, column in enumerate(data.columns):
+ fig.add_trace(go.Bar(x=data.index, y=data[column], name=column, marker=dict(color=processed_colors[i])))
+
+ fig.update_layout(
+ barmode='group',
+                bargap=0.2,  # Gap between bars at adjacent x positions
+                bargroupgap=0,  # No gap between bars within a group
+ )
+ elif style == 'line':
for i, column in enumerate(data.columns):
fig.add_trace(
go.Scatter(
@@ -390,7 +401,7 @@ def with_plotly(
line=dict(shape='hv', color=processed_colors[i]),
)
)
- elif mode == 'area':
+ elif style == 'area':
data = data.copy()
data[(data > -1e-5) & (data < 1e-5)] = 0 # Preventing issues with plotting
# Split columns into positive, negative, and mixed categories
@@ -400,7 +411,7 @@ def with_plotly(
mixed_columns = list(set(data.columns) - set(positive_columns + negative_columns))
if mixed_columns:
- logger.warning(
+ logger.error(
f'Data for plotting stacked lines contains columns with both positive and negative values:'
f' {mixed_columns}. These can not be stacked, and are printed as simple lines'
)
@@ -450,14 +461,6 @@ def with_plotly(
plot_bgcolor='rgba(0,0,0,0)', # Transparent background
paper_bgcolor='rgba(0,0,0,0)', # Transparent paper background
font=dict(size=14), # Increase font size for better readability
- legend=dict(
- orientation='h', # Horizontal legend
- yanchor='bottom',
- y=-0.3, # Adjusts how far below the plot it appears
- xanchor='center',
- x=0.5,
- title_text=None, # Removes legend title for a cleaner look
- ),
)
return fig
@@ -465,7 +468,7 @@ def with_plotly(
def with_matplotlib(
data: pd.DataFrame,
- mode: Literal['bar', 'line'] = 'bar',
+ style: Literal['stacked_bar', 'line'] = 'stacked_bar',
colors: ColorType = 'viridis',
title: str = '',
ylabel: str = '',
@@ -480,7 +483,7 @@ def with_matplotlib(
Args:
data: A DataFrame containing the data to plot. The index should represent time (e.g., hours),
and each column represents a separate data series.
- mode: Plotting mode. Use 'bar' for stacked bar charts or 'line' for stepped lines.
+ style: Plotting style. Use 'stacked_bar' for stacked bar charts or 'line' for stepped lines.
colors: Color specification, can be:
- A string with a colormap name (e.g., 'viridis', 'plasma')
- A list of color strings (e.g., ['#ff0000', '#00ff00'])
@@ -496,20 +499,19 @@ def with_matplotlib(
A tuple containing the Matplotlib figure and axes objects used for the plot.
Notes:
- - If `mode` is 'bar', bars are stacked for both positive and negative values.
+ - If `style` is 'stacked_bar', bars are stacked for both positive and negative values.
Negative values are stacked separately without extra labels in the legend.
- - If `mode` is 'line', stepped lines are drawn for each data series.
- - The legend is placed below the plot to accommodate multiple data series.
+ - If `style` is 'line', stepped lines are drawn for each data series.
"""
- if mode not in ('bar', 'line'):
- raise ValueError(f"'mode' must be one of {{'bar','line'}} for matplotlib, got {mode!r}")
+ if style not in ('stacked_bar', 'line'):
+ raise ValueError(f"'style' must be one of {{'stacked_bar','line'}} for matplotlib, got {style!r}")
if fig is None or ax is None:
fig, ax = plt.subplots(figsize=figsize)
processed_colors = ColorProcessor(engine='matplotlib').process_colors(colors, list(data.columns))
- if mode == 'bar':
+ if style == 'stacked_bar':
cumulative_positive = np.zeros(len(data))
cumulative_negative = np.zeros(len(data))
width = data.index.to_series().diff().dropna().min() # Minimum time difference
@@ -540,7 +542,7 @@ def with_matplotlib(
)
cumulative_negative += negative_values.values
- elif mode == 'line':
+ elif style == 'line':
for i, column in enumerate(data.columns):
ax.step(data.index, data[column], where='post', color=processed_colors[i], label=column)
@@ -780,7 +782,7 @@ def heat_map_data_from_df(
minimum_time_diff_in_min = diffs.min().total_seconds() / 60
time_intervals = {'min': 1, '15min': 15, 'h': 60, 'D': 24 * 60, 'W': 7 * 24 * 60}
if time_intervals[steps_per_period] > minimum_time_diff_in_min:
- logger.warning(
+ logger.error(
f'To compute the heatmap, the data was aggregated from {minimum_time_diff_in_min:.2f} min to '
f'{time_intervals[steps_per_period]:.2f} min. Mean values are displayed.'
)
@@ -890,11 +892,9 @@ def plot_network(
worked = webbrowser.open(f'file://{path.resolve()}', 2)
if not worked:
- logger.warning(
- f'Showing the network in the Browser went wrong. Open it manually. Its saved under {path}'
- )
+                logger.error(f'Showing the network in the browser went wrong. Open it manually. It is saved under {path}')
except Exception as e:
- logger.warning(
+ logger.error(
f'Showing the network in the Browser went wrong. Open it manually. Its saved under {path}: {e}'
)
@@ -933,7 +933,7 @@ def pie_with_plotly(
"""
if data.empty:
- logger.warning('Empty DataFrame provided for pie chart. Returning empty figure.')
+ logger.error('Empty DataFrame provided for pie chart. Returning empty figure.')
return go.Figure()
# Create a copy to avoid modifying the original DataFrame
@@ -941,7 +941,7 @@ def pie_with_plotly(
# Check if any negative values and warn
if (data_copy < 0).any().any():
- logger.warning('Negative values detected in data. Using absolute values for pie chart.')
+ logger.error('Negative values detected in data. Using absolute values for pie chart.')
data_copy = data_copy.abs()
# If data has multiple rows, sum them to get total for each column
@@ -1023,7 +1023,7 @@ def pie_with_matplotlib(
"""
if data.empty:
- logger.warning('Empty DataFrame provided for pie chart. Returning empty figure.')
+ logger.error('Empty DataFrame provided for pie chart. Returning empty figure.')
if fig is None or ax is None:
fig, ax = plt.subplots(figsize=figsize)
return fig, ax
@@ -1033,7 +1033,7 @@ def pie_with_matplotlib(
# Check if any negative values and warn
if (data_copy < 0).any().any():
- logger.warning('Negative values detected in data. Using absolute values for pie chart.')
+ logger.error('Negative values detected in data. Using absolute values for pie chart.')
data_copy = data_copy.abs()
# If data has multiple rows, sum them to get total for each column
@@ -1138,7 +1138,7 @@ def dual_pie_with_plotly(
# Check for empty data
if data_left.empty and data_right.empty:
- logger.warning('Both datasets are empty. Returning empty figure.')
+ logger.error('Both datasets are empty. Returning empty figure.')
return go.Figure()
# Create a subplot figure
@@ -1161,7 +1161,7 @@ def preprocess_series(series: pd.Series):
"""
# Handle negative values
if (series < 0).any():
- logger.warning('Negative values detected in data. Using absolute values for pie chart.')
+ logger.error('Negative values detected in data. Using absolute values for pie chart.')
series = series.abs()
# Remove zeros
@@ -1246,7 +1246,6 @@ def create_pie_trace(data_series, side):
paper_bgcolor='rgba(0,0,0,0)', # Transparent paper background
font=dict(size=14),
margin=dict(t=80, b=50, l=30, r=30),
- legend=dict(orientation='h', yanchor='bottom', y=-0.2, xanchor='center', x=0.5, font=dict(size=12)),
)
return fig
@@ -1290,7 +1289,7 @@ def dual_pie_with_matplotlib(
"""
# Check for empty data
if data_left.empty and data_right.empty:
- logger.warning('Both datasets are empty. Returning empty figure.')
+ logger.error('Both datasets are empty. Returning empty figure.')
if fig is None:
fig, axes = plt.subplots(1, 2, figsize=figsize)
return fig, axes
@@ -1308,7 +1307,7 @@ def preprocess_series(series: pd.Series):
"""
# Handle negative values
if (series < 0).any():
- logger.warning('Negative values detected in data. Using absolute values for pie chart.')
+ logger.error('Negative values detected in data. Using absolute values for pie chart.')
series = series.abs()
# Remove zeros
@@ -1449,20 +1448,49 @@ def export_figure(
if filename.suffix != '.html':
logger.warning(f'To save a Plotly figure, using .html. Adjusting suffix for {filename}')
filename = filename.with_suffix('.html')
- if show and not save:
- fig.show()
- elif save and show:
- plotly.offline.plot(fig, filename=str(filename))
- elif save and not show:
- fig.write_html(str(filename))
+
+ try:
+ is_test_env = 'PYTEST_CURRENT_TEST' in os.environ
+
+ if is_test_env:
+ # Test environment: never open browser, only save if requested
+ if save:
+ fig.write_html(str(filename))
+ # Ignore show flag in tests
+ else:
+ # Production environment: respect show and save flags
+ if save and show:
+ # Save and auto-open in browser
+ plotly.offline.plot(fig, filename=str(filename))
+ elif save and not show:
+ # Save without opening
+ fig.write_html(str(filename))
+ elif show and not save:
+ # Show interactively without saving
+ fig.show()
+ # If neither save nor show: do nothing
+ finally:
+ # Cleanup to prevent socket warnings
+ if hasattr(fig, '_renderer'):
+ fig._renderer = None
+
return figure_like
elif isinstance(figure_like, tuple):
fig, ax = figure_like
if show:
- fig.show()
+ # Only show if using interactive backend and not in test environment
+ backend = matplotlib.get_backend().lower()
+ is_interactive = backend not in {'agg', 'pdf', 'ps', 'svg', 'template'}
+ is_test_env = 'PYTEST_CURRENT_TEST' in os.environ
+
+ if is_interactive and not is_test_env:
+ plt.show()
+
if save:
fig.savefig(str(filename), dpi=300)
+ plt.close(fig) # Close figure to free memory
+
return fig, ax
raise TypeError(f'Figure type not supported: {type(figure_like)}')
diff --git a/flixopt/results.py b/flixopt/results.py
index d93285d2c..e571bc558 100644
--- a/flixopt/results.py
+++ b/flixopt/results.py
@@ -4,7 +4,8 @@
import json
import logging
import pathlib
-from typing import TYPE_CHECKING, Literal
+import warnings
+from typing import TYPE_CHECKING, Any, Literal
import linopy
import numpy as np
@@ -15,18 +16,25 @@
from . import io as fx_io
from . import plotting
-from .core import TimeSeriesCollection
+from .flow_system import FlowSystem
if TYPE_CHECKING:
import matplotlib.pyplot as plt
import pyvis
from .calculation import Calculation, SegmentedCalculation
+ from .core import FlowSystemDimensions
logger = logging.getLogger('flixopt')
+class _FlowSystemRestorationError(Exception):
+ """Exception raised when a FlowSystem cannot be restored from dataset."""
+
+
class CalculationResults:
"""Comprehensive container for optimization calculation results and analysis tools.
@@ -51,7 +59,7 @@ class CalculationResults:
Attributes:
solution: Dataset containing all optimization variable solutions
- flow_system: Dataset with complete system configuration and parameters. Restore the used FlowSystem for further analysis.
+        flow_system_data: Dataset with the complete system configuration and parameters; used to restore the FlowSystem for further analysis.
summary: Calculation metadata including solver status, timing, and statistics
name: Unique identifier for this calculation
model: Original linopy optimization model (if available)
@@ -134,7 +142,7 @@ def from_file(cls, folder: str | pathlib.Path, name: str) -> CalculationResults:
return cls(
solution=fx_io.load_dataset_from_netcdf(paths.solution),
- flow_system=fx_io.load_dataset_from_netcdf(paths.flow_system),
+ flow_system_data=fx_io.load_dataset_from_netcdf(paths.flow_system),
name=name,
folder=folder,
model=model,
@@ -153,7 +161,7 @@ def from_calculation(cls, calculation: Calculation) -> CalculationResults:
"""
return cls(
solution=calculation.model.solution,
- flow_system=calculation.flow_system.as_dataset(constants_in_dataset=True),
+ flow_system_data=calculation.flow_system.to_dataset(),
summary=calculation.summary,
model=calculation.model,
name=calculation.name,
@@ -163,41 +171,73 @@ def from_calculation(cls, calculation: Calculation) -> CalculationResults:
def __init__(
self,
solution: xr.Dataset,
- flow_system: xr.Dataset,
+ flow_system_data: xr.Dataset,
name: str,
summary: dict,
folder: pathlib.Path | None = None,
model: linopy.Model | None = None,
+ **kwargs, # To accept old "flow_system" parameter
):
"""Initialize CalculationResults with optimization data.
Usually, this class is instantiated by the Calculation class, or by loading from file.
Args:
solution: Optimization solution dataset.
- flow_system: Flow system configuration dataset.
+ flow_system_data: Flow system configuration dataset.
name: Calculation name.
summary: Calculation metadata.
folder: Results storage folder.
model: Linopy optimization model.
+ Deprecated:
+ flow_system: Use flow_system_data instead.
"""
+ # Handle potential old "flow_system" parameter for backward compatibility
+ if 'flow_system' in kwargs and flow_system_data is None:
+ flow_system_data = kwargs.pop('flow_system')
+ warnings.warn(
+ "The 'flow_system' parameter is deprecated. Use 'flow_system_data' instead."
+ "Acess is now by '.flow_system_data', while '.flow_system' returns the restored FlowSystem.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
self.solution = solution
- self.flow_system = flow_system
+ self.flow_system_data = flow_system_data
self.summary = summary
self.name = name
self.model = model
self.folder = pathlib.Path(folder) if folder is not None else pathlib.Path.cwd() / 'results'
self.components = {
- label: ComponentResults.from_json(self, infos) for label, infos in self.solution.attrs['Components'].items()
+ label: ComponentResults(self, **infos) for label, infos in self.solution.attrs['Components'].items()
}
- self.buses = {label: BusResults.from_json(self, infos) for label, infos in self.solution.attrs['Buses'].items()}
+ self.buses = {label: BusResults(self, **infos) for label, infos in self.solution.attrs['Buses'].items()}
- self.effects = {
- label: EffectResults.from_json(self, infos) for label, infos in self.solution.attrs['Effects'].items()
- }
+ self.effects = {label: EffectResults(self, **infos) for label, infos in self.solution.attrs['Effects'].items()}
+
+ if 'Flows' not in self.solution.attrs:
+ warnings.warn(
+                'No data about flows found in the results. This data is only included since v2.2.0. Some functionality '
+                'is not available. We recommend evaluating your results with a version <2.2.0.',
+ stacklevel=2,
+ )
+ self.flows = {}
+ else:
+ self.flows = {
+ label: FlowResults(self, **infos) for label, infos in self.solution.attrs.get('Flows', {}).items()
+ }
self.timesteps_extra = self.solution.indexes['time']
- self.hours_per_timestep = TimeSeriesCollection.calculate_hours_per_timestep(self.timesteps_extra)
+ self.hours_per_timestep = FlowSystem.calculate_hours_per_timestep(self.timesteps_extra)
+ self.scenarios = self.solution.indexes['scenario'] if 'scenario' in self.solution.indexes else None
+
+ self._effect_share_factors = None
+ self._flow_system = None
+
+ self._flow_rates = None
+ self._flow_hours = None
+ self._sizes = None
+ self._effects_per_component = None
def __getitem__(self, key: str) -> ComponentResults | BusResults | EffectResults:
if key in self.components:
@@ -206,6 +246,8 @@ def __getitem__(self, key: str) -> ComponentResults | BusResults | EffectResults
return self.buses[key]
if key in self.effects:
return self.effects[key]
+ if key in self.flows:
+ return self.flows[key]
raise KeyError(f'No element with label {key} found.')
@property
@@ -216,7 +258,12 @@ def storages(self) -> list[ComponentResults]:
@property
def objective(self) -> float:
"""Get optimization objective value."""
- return self.summary['Main Results']['Objective']
+        # Deprecated fallback for results created before the objective was stored in the solution
+        if 'objective' not in self.solution:
+            logger.warning('Objective not found in solution. Falling back to summary (rounded value). This is deprecated.')
+ return self.summary['Main Results']['Objective']
+
+ return self.solution['objective'].item()
@property
def variables(self) -> linopy.Variables:
@@ -232,21 +279,411 @@ def constraints(self) -> linopy.Constraints:
raise ValueError('The linopy model is not available.')
return self.model.constraints
+ @property
+ def effect_share_factors(self):
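+        """Effect-to-effect conversion factors as {'temporal': ..., 'periodic': ...}, computed lazily from the restored FlowSystem."""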
+ if self._effect_share_factors is None:
+ effect_share_factors = self.flow_system.effects.calculate_effect_share_factors()
+ self._effect_share_factors = {'temporal': effect_share_factors[0], 'periodic': effect_share_factors[1]}
+ return self._effect_share_factors
+
+ @property
+ def flow_system(self) -> FlowSystem:
+ """The restored flow_system that was used to create the calculation.
+ Contains all input parameters."""
+ if self._flow_system is None:
+ old_level = logger.level
+ logger.level = logging.CRITICAL
+ try:
+ self._flow_system = FlowSystem.from_dataset(self.flow_system_data)
+ self._flow_system._connect_network()
+ except Exception as e:
+ logger.critical(
+                    f'Unable to restore FlowSystem from dataset. Some functionality is not available. {e}'
+                )
+                raise _FlowSystemRestorationError(f'Unable to restore FlowSystem from dataset. {e}') from e
+ finally:
+ logger.level = old_level
+ return self._flow_system
+
def filter_solution(
- self, variable_dims: Literal['scalar', 'time'] | None = None, element: str | None = None
+ self,
+ variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None,
+ element: str | None = None,
+ timesteps: pd.DatetimeIndex | None = None,
+ scenarios: pd.Index | None = None,
+ contains: str | list[str] | None = None,
+ startswith: str | list[str] | None = None,
) -> xr.Dataset:
"""Filter solution by variable dimension and/or element.
Args:
- variable_dims: Variable dimension to filter ('scalar' or 'time').
- element: Element label to filter.
+ variable_dims: The dimension of which to get variables from.
+ - 'scalar': Get scalar variables (without dimensions)
+ - 'time': Get time-dependent variables (with a time dimension)
+            - 'scenario': Get scenario-dependent variables (with a scenario dimension)
+ - 'timeonly': Get time-dependent variables (with ONLY a time dimension)
+ - 'scenarioonly': Get scenario-dependent variables (with ONLY a scenario dimension)
+ element: The element to filter for.
+ timesteps: Optional time indexes to select. Can be:
+ - pd.DatetimeIndex: Multiple timesteps
+ - str/pd.Timestamp: Single timestep
+ Defaults to all available timesteps.
+ scenarios: Optional scenario indexes to select. Can be:
+ - pd.Index: Multiple scenarios
+ - str/int: Single scenario (int is treated as a label, not an index position)
+ Defaults to all available scenarios.
+ contains: Filter variables that contain this string or strings.
+ If a list is provided, variables must contain ALL strings in the list.
+ startswith: Filter variables that start with this string or strings.
+ If a list is provided, variables must start with ANY of the strings in the list.
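+
+        Example:
+            Time-dependent variables whose name contains 'flow_rate' (filter strings are illustrative):
+
+            >>> results.filter_solution(variable_dims='time', contains='flow_rate')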
+ """
+ return filter_dataset(
+ self.solution if element is None else self[element].solution,
+ variable_dims=variable_dims,
+ timesteps=timesteps,
+ scenarios=scenarios,
+ contains=contains,
+ startswith=startswith,
+ )
+
+ @property
+ def effects_per_component(self) -> xr.Dataset:
+ """Returns a dataset containing effect results for each mode, aggregated by Component
+
+ Returns:
+ An xarray Dataset with an additional component dimension and effects as variables.
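+
+        Example:
+            Total effects per component as a DataFrame (assumes an effect labeled 'costs' exists):
+
+            >>> results.effects_per_component['total'].sel(effect='costs').to_pandas()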
+ """
+ if self._effects_per_component is None:
+ self._effects_per_component = xr.Dataset(
+ {
+ mode: self._create_effects_dataset(mode).to_dataarray('effect', name=mode)
+ for mode in ['temporal', 'periodic', 'total']
+ }
+ )
+ dim_order = ['time', 'period', 'scenario', 'component', 'effect']
+ self._effects_per_component = self._effects_per_component.transpose(*dim_order, missing_dims='ignore')
+
+ return self._effects_per_component
+
+ def flow_rates(
+ self,
+ start: str | list[str] | None = None,
+ end: str | list[str] | None = None,
+ component: str | list[str] | None = None,
+ ) -> xr.DataArray:
+ """Returns a DataArray containing the flow rates of each Flow.
+
+ Args:
+ start: Optional source node(s) to filter by. Can be a single node name or a list of names.
+ end: Optional destination node(s) to filter by. Can be a single node name or a list of names.
+ component: Optional component(s) to filter by. Can be a single component name or a list of names.
+
+ Further usage:
+            Convert the dataarray to a dataframe:
+            >>> results.flow_rates().to_pandas()
+            Get the max or min over time:
+            >>> results.flow_rates().max('time')
+            Sum up the flow rates of flows with the same start and end:
+            >>> results.flow_rates(end='Fernwärme').groupby('start').sum(dim='flow')
+            To recombine filtered dataarrays, use `xr.concat` with dim 'flow':
+            >>> xr.concat([results.flow_rates(start='Fernwärme'), results.flow_rates(end='Fernwärme')], dim='flow')
+ """
+ if self._flow_rates is None:
+ self._flow_rates = self._assign_flow_coords(
+ xr.concat(
+ [flow.flow_rate.rename(flow.label) for flow in self.flows.values()],
+ dim=pd.Index(self.flows.keys(), name='flow'),
+ )
+ ).rename('flow_rates')
+ filters = {k: v for k, v in {'start': start, 'end': end, 'component': component}.items() if v is not None}
+ return filter_dataarray_by_coord(self._flow_rates, **filters)
+
+ def flow_hours(
+ self,
+ start: str | list[str] | None = None,
+ end: str | list[str] | None = None,
+ component: str | list[str] | None = None,
+ ) -> xr.DataArray:
+ """Returns a DataArray containing the flow hours of each Flow.
+
+ Flow hours represent the total energy/material transferred over time,
+ calculated by multiplying flow rates by the duration of each timestep.
+
+ Args:
+ start: Optional source node(s) to filter by. Can be a single node name or a list of names.
+ end: Optional destination node(s) to filter by. Can be a single node name or a list of names.
+ component: Optional component(s) to filter by. Can be a single component name or a list of names.
+
+ Further usage:
+            Convert the dataarray to a dataframe:
+            >>> results.flow_hours().to_pandas()
+            Sum up the flow hours over time:
+            >>> results.flow_hours().sum('time')
+            Sum up the flow hours of flows with the same start and end:
+            >>> results.flow_hours(end='Fernwärme').groupby('start').sum(dim='flow')
+            To recombine filtered dataarrays, use `xr.concat` with dim 'flow':
+            >>> xr.concat([results.flow_hours(start='Fernwärme'), results.flow_hours(end='Fernwärme')], dim='flow')
+
+ """
+ if self._flow_hours is None:
+ self._flow_hours = (self.flow_rates() * self.hours_per_timestep).rename('flow_hours')
+ filters = {k: v for k, v in {'start': start, 'end': end, 'component': component}.items() if v is not None}
+ return filter_dataarray_by_coord(self._flow_hours, **filters)
+
+ def sizes(
+ self,
+ start: str | list[str] | None = None,
+ end: str | list[str] | None = None,
+ component: str | list[str] | None = None,
+ ) -> xr.DataArray:
+ """Returns a dataset with the sizes of the Flows.
+ Args:
+ start: Optional source node(s) to filter by. Can be a single node name or a list of names.
+ end: Optional destination node(s) to filter by. Can be a single node name or a list of names.
+ component: Optional component(s) to filter by. Can be a single component name or a list of names.
+
+ Further usage:
+            Convert the dataarray to a dataframe:
+            >>> results.sizes().to_pandas()
+            To recombine filtered dataarrays, use `xr.concat` with dim 'flow':
+            >>> xr.concat([results.sizes(start='Fernwärme'), results.sizes(end='Fernwärme')], dim='flow')
+
+ """
+ if self._sizes is None:
+ self._sizes = self._assign_flow_coords(
+ xr.concat(
+ [flow.size.rename(flow.label) for flow in self.flows.values()],
+ dim=pd.Index(self.flows.keys(), name='flow'),
+ )
+ ).rename('flow_sizes')
+ filters = {k: v for k, v in {'start': start, 'end': end, 'component': component}.items() if v is not None}
+ return filter_dataarray_by_coord(self._sizes, **filters)
+
+ def _assign_flow_coords(self, da: xr.DataArray):
+ # Add start and end coordinates
+ da = da.assign_coords(
+ {
+ 'start': ('flow', [flow.start for flow in self.flows.values()]),
+ 'end': ('flow', [flow.end for flow in self.flows.values()]),
+ 'component': ('flow', [flow.component for flow in self.flows.values()]),
+ }
+ )
+
+ # Ensure flow is the last dimension if needed
+ existing_dims = [d for d in da.dims if d != 'flow']
+ da = da.transpose(*(existing_dims + ['flow']))
+ return da
+
+ def get_effect_shares(
+ self,
+ element: str,
+ effect: str,
+ mode: Literal['temporal', 'periodic'] | None = None,
+ include_flows: bool = False,
+ ) -> xr.Dataset:
+ """Retrieves individual effect shares for a specific element and effect.
+        Either for temporal, periodic, or both modes combined.
+ Only includes the direct shares.
+
+ Args:
+ element: The element identifier for which to retrieve effect shares.
+ effect: The effect identifier for which to retrieve shares.
+            mode: Optional. The mode to retrieve shares for. Can be 'temporal', 'periodic',
+                or None to retrieve both. Defaults to None.
+            include_flows: Whether to also include the shares of the flows connected to the element.
+
+ Returns:
+            An xarray Dataset containing the requested effect shares. If mode is None,
+            returns a merged Dataset containing both temporal and periodic shares.
+
+ Raises:
+ ValueError: If the specified effect is not available or if mode is invalid.
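+
+        Example:
+            Direct temporal shares of one element to one effect (labels are illustrative):
+
+            >>> results.get_effect_shares('Boiler', 'costs', mode='temporal')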
+ """
+ if effect not in self.effects:
+ raise ValueError(f'Effect {effect} is not available.')
+
+ if mode is None:
+ return xr.merge(
+ [
+ self.get_effect_shares(
+ element=element, effect=effect, mode='temporal', include_flows=include_flows
+ ),
+ self.get_effect_shares(
+ element=element, effect=effect, mode='periodic', include_flows=include_flows
+ ),
+ ]
+ )
+
+ if mode not in ['temporal', 'periodic']:
+ raise ValueError(f'Mode {mode} is not available. Choose between "temporal" and "periodic".')
+
+ ds = xr.Dataset()
+
+ label = f'{element}->{effect}({mode})'
+ if label in self.solution:
+ ds = xr.Dataset({label: self.solution[label]})
+
+ if include_flows:
+ if element not in self.components:
+ raise ValueError(f'Only use Components when retrieving Effects including flows. Got {element}')
+ flows = [
+ label.split('|')[0] for label in self.components[element].inputs + self.components[element].outputs
+ ]
+ return xr.merge(
+ [ds]
+ + [
+ self.get_effect_shares(element=flow, effect=effect, mode=mode, include_flows=False)
+ for flow in flows
+ ]
+ )
+
+ return ds
+
+ def _compute_effect_total(
+ self,
+ element: str,
+ effect: str,
+ mode: Literal['temporal', 'periodic', 'total'] = 'total',
+ include_flows: bool = False,
+ ) -> xr.DataArray:
+ """Calculates the total effect for a specific element and effect.
+
+ This method computes the total direct and indirect effects for a given element
+ and effect, considering the conversion factors between different effects.
+
+ Args:
+ element: The element identifier for which to calculate total effects.
+ effect: The effect identifier to calculate.
+ mode: The calculation mode. Options are:
+ 'temporal': Returns temporal effects.
+                'periodic': Returns periodic (investment-related) effects.
+ 'total': Returns the sum of temporal effects and periodic effects. Defaults to 'total'.
+ include_flows: Whether to include effects from flows connected to this element.
+
+ Returns:
+ An xarray DataArray containing the total effects, named with pattern
+ '{element}->{effect}' for mode='total' or '{element}->{effect}({mode})'
+ for other modes.
+
+ Raises:
+ ValueError: If the specified effect is not available.
+ """
+ if effect not in self.effects:
+ raise ValueError(f'Effect {effect} is not available.')
+
+ if mode == 'total':
+ temporal = self._compute_effect_total(
+ element=element, effect=effect, mode='temporal', include_flows=include_flows
+ )
+ periodic = self._compute_effect_total(
+ element=element, effect=effect, mode='periodic', include_flows=include_flows
+ )
+ if periodic.isnull().all() and temporal.isnull().all():
+ return xr.DataArray(np.nan)
+ if temporal.isnull().all():
+ return periodic.rename(f'{element}->{effect}')
+ temporal = temporal.sum('time')
+ if periodic.isnull().all():
+ return temporal.rename(f'{element}->{effect}')
+ if 'time' in temporal.indexes:
+ temporal = temporal.sum('time')
+ return periodic + temporal
+
+ total = xr.DataArray(0)
+ share_exists = False
+
+ relevant_conversion_factors = {
+ key[0]: value for key, value in self.effect_share_factors[mode].items() if key[1] == effect
+ }
+ relevant_conversion_factors[effect] = 1 # Share to itself is 1
+
+ for target_effect, conversion_factor in relevant_conversion_factors.items():
+ label = f'{element}->{target_effect}({mode})'
+ if label in self.solution:
+ share_exists = True
+ da = self.solution[label]
+ total = da * conversion_factor + total
+
+ if include_flows:
+ if element not in self.components:
+ raise ValueError(f'Only use Components when retrieving Effects including flows. Got {element}')
+ flows = [
+ label.split('|')[0] for label in self.components[element].inputs + self.components[element].outputs
+ ]
+ for flow in flows:
+ label = f'{flow}->{target_effect}({mode})'
+ if label in self.solution:
+ share_exists = True
+ da = self.solution[label]
+ total = da * conversion_factor + total
+ if not share_exists:
+ total = xr.DataArray(np.nan)
+ return total.rename(f'{element}->{effect}({mode})')
+
+ def _create_effects_dataset(self, mode: Literal['temporal', 'periodic', 'total']) -> xr.Dataset:
+ """Creates a dataset containing effect totals for all components (including their flows).
+        The dataset contains both the direct and the indirect effects of each component.
+
+ Args:
+ mode: The calculation mode ('temporal', 'periodic', or 'total').
Returns:
- xr.Dataset: Filtered solution dataset.
+ An xarray Dataset with components as dimension and effects as variables.
"""
- if element is not None:
- return filter_dataset(self[element].solution, variable_dims)
- return filter_dataset(self.solution, variable_dims)
+ ds = xr.Dataset()
+ all_arrays = {}
+ template = None # Template is needed to determine the dimensions of the arrays. This handles the case of no shares for an effect
+
+ components_list = list(self.components)
+
+ # First pass: collect arrays and find template
+ for effect in self.effects:
+ effect_arrays = []
+ for component in components_list:
+ da = self._compute_effect_total(element=component, effect=effect, mode=mode, include_flows=True)
+ effect_arrays.append(da)
+
+ if template is None and (da.dims or not da.isnull().all()):
+ template = da
+
+ all_arrays[effect] = effect_arrays
+
+ # Ensure we have a template
+ if template is None:
+ raise ValueError(
+ f"No template with proper dimensions found for mode '{mode}'. "
+ f'All computed arrays are scalars, which indicates a data issue.'
+ )
+
+ # Second pass: process all effects (guaranteed to include all)
+ for effect in self.effects:
+ dataarrays = all_arrays[effect]
+ component_arrays = []
+
+ for component, arr in zip(components_list, dataarrays, strict=False):
+ # Expand scalar NaN arrays to match template dimensions
+ if not arr.dims and np.isnan(arr.item()):
+ arr = xr.full_like(template, np.nan, dtype=float).rename(arr.name)
+
+ component_arrays.append(arr.expand_dims(component=[component]))
+
+ ds[effect] = xr.concat(component_arrays, dim='component', coords='minimal', join='outer').rename(effect)
+
+        # Consistency check: per-component totals must add up to the corresponding solution variable
+ suffix = {
+ 'temporal': '(temporal)|per_timestep',
+ 'periodic': '(periodic)',
+ 'total': '',
+ }
+ for effect in self.effects:
+ label = f'{effect}{suffix[mode]}'
+ computed = ds[effect].sum('component')
+ found = self.solution[label]
+ if not np.allclose(computed.values, found.fillna(0).values):
+ logger.critical(
+                    f'Results for {effect}({mode}) in effects_dataset do not match {label}\n{computed=}\n{found=}'
+ )
+
+ return ds
def plot_heatmap(
self,
@@ -257,9 +694,52 @@ def plot_heatmap(
save: bool | pathlib.Path = False,
show: bool = True,
engine: plotting.PlottingEngine = 'plotly',
+ indexer: dict[FlowSystemDimensions, Any] | None = None,
) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]:
+ """
+ Plots a heatmap of the solution of a variable.
+
+ Args:
+ variable_name: The name of the variable to plot.
+ heatmap_timeframes: The timeframes to use for the heatmap.
+ heatmap_timesteps_per_frame: The timesteps per frame to use for the heatmap.
+ color_map: The color map to use for the heatmap.
+ save: Whether to save the plot or not. If a path is provided, the plot will be saved at that location.
+ show: Whether to show the plot or not.
+ engine: The engine to use for plotting. Can be either 'plotly' or 'matplotlib'.
+ indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}.
+ If None, uses first value for each dimension.
+ If empty dict {}, uses all values.
+
+ Examples:
+ Basic usage (uses first scenario, first period, all time):
+
+ >>> results.plot_heatmap('Battery|charge_state')
+
+ Select specific scenario and period:
+
+ >>> results.plot_heatmap('Boiler(Qth)|flow_rate', indexer={'scenario': 'base', 'period': 2024})
+
+ Time filtering (summer months only):
+
+ >>> results.plot_heatmap(
+ ... 'Boiler(Qth)|flow_rate',
+ ... indexer={
+ ... 'scenario': 'base',
+ ... 'time': results.solution.time[results.solution.time.dt.month.isin([6, 7, 8])],
+ ... },
+ ... )
+
+ Save to specific location:
+
+ >>> results.plot_heatmap(
+ ... 'Boiler(Qth)|flow_rate', indexer={'scenario': 'base'}, save='path/to/my_heatmap.html'
+ ... )
+ """
+ dataarray = self.solution[variable_name]
+
return plot_heatmap(
- dataarray=self.solution[variable_name],
+ dataarray=dataarray,
name=variable_name,
folder=self.folder,
heatmap_timeframes=heatmap_timeframes,
@@ -268,6 +748,7 @@ def plot_heatmap(
save=save,
show=show,
engine=engine,
+ indexer=indexer,
)
def plot_network(
@@ -288,16 +769,9 @@ def plot_network(
path: Save path for network HTML.
show: Whether to display the plot.
"""
- try:
- from .flow_system import FlowSystem
-
- flow_system = FlowSystem.from_dataset(self.flow_system)
- except Exception as e:
- logger.critical(f'Could not reconstruct the flow_system from dataset: {e}')
- return None
if path is None:
path = self.folder / f'{self.name}--network.html'
- return flow_system.plot_network(controls=controls, path=path, show=show)
+ return self.flow_system.plot_network(controls=controls, path=path, show=show)
def to_file(
self,
@@ -329,7 +803,7 @@ def to_file(
paths = fx_io.CalculationResultsPaths(folder, name)
fx_io.save_dataset_to_netcdf(self.solution, paths.solution, compression=compression)
- fx_io.save_dataset_to_netcdf(self.flow_system, paths.flow_system, compression=compression)
+ fx_io.save_dataset_to_netcdf(self.flow_system_data, paths.flow_system, compression=compression)
with open(paths.summary, 'w', encoding='utf-8') as f:
yaml.dump(self.summary, f, allow_unicode=True, sort_keys=False, indent=4, width=1000)
@@ -350,10 +824,6 @@ def to_file(
class _ElementResults:
- @classmethod
- def from_json(cls, calculation_results, json_data: dict) -> _ElementResults:
- return cls(calculation_results, json_data['label'], json_data['variables'], json_data['constraints'])
-
def __init__(
self, calculation_results: CalculationResults, label: str, variables: list[str], constraints: list[str]
):
@@ -386,30 +856,49 @@ def constraints(self) -> linopy.Constraints:
raise ValueError('The linopy model is not available.')
return self._calculation_results.model.constraints[self._constraint_names]
- def filter_solution(self, variable_dims: Literal['scalar', 'time'] | None = None) -> xr.Dataset:
- """Filter element solution by dimension.
+ def filter_solution(
+ self,
+ variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None,
+ timesteps: pd.DatetimeIndex | None = None,
+ scenarios: pd.Index | None = None,
+ contains: str | list[str] | None = None,
+ startswith: str | list[str] | None = None,
+ ) -> xr.Dataset:
+ """
+ Filter the solution to a specific variable dimension and element.
+ If no element is specified, all elements are included.
Args:
- variable_dims: Variable dimension to filter.
-
- Returns:
- xr.Dataset: Filtered solution dataset.
+ variable_dims: The dimension of which to get variables from.
+ - 'scalar': Get scalar variables (without dimensions)
+ - 'time': Get time-dependent variables (with a time dimension)
+                - 'scenario': Get scenario-dependent variables (with a scenario dimension)
+ - 'timeonly': Get time-dependent variables (with ONLY a time dimension)
+ - 'scenarioonly': Get scenario-dependent variables (with ONLY a scenario dimension)
+ timesteps: Optional time indexes to select. Can be:
+ - pd.DatetimeIndex: Multiple timesteps
+ - str/pd.Timestamp: Single timestep
+ Defaults to all available timesteps.
+ scenarios: Optional scenario indexes to select. Can be:
+ - pd.Index: Multiple scenarios
+ - str/int: Single scenario (int is treated as a label, not an index position)
+ Defaults to all available scenarios.
+ contains: Filter variables that contain this string or strings.
+ If a list is provided, variables must contain ALL strings in the list.
+ startswith: Filter variables that start with this string or strings.
+ If a list is provided, variables must start with ANY of the strings in the list.
"""
- return filter_dataset(self.solution, variable_dims)
+ return filter_dataset(
+ self.solution,
+ variable_dims=variable_dims,
+ timesteps=timesteps,
+ scenarios=scenarios,
+ contains=contains,
+ startswith=startswith,
+ )
class _NodeResults(_ElementResults):
- @classmethod
- def from_json(cls, calculation_results, json_data: dict) -> _NodeResults:
- return cls(
- calculation_results,
- json_data['label'],
- json_data['variables'],
- json_data['constraints'],
- json_data['inputs'],
- json_data['outputs'],
- )
-
def __init__(
self,
calculation_results: CalculationResults,
@@ -418,10 +907,12 @@ def __init__(
constraints: list[str],
inputs: list[str],
outputs: list[str],
+ flows: list[str],
):
super().__init__(calculation_results, label, variables, constraints)
self.inputs = inputs
self.outputs = outputs
+ self.flows = flows
def plot_node_balance(
self,
@@ -429,32 +920,47 @@ def plot_node_balance(
show: bool = True,
colors: plotting.ColorType = 'viridis',
engine: plotting.PlottingEngine = 'plotly',
+ indexer: dict[FlowSystemDimensions, Any] | None = None,
+ mode: Literal['flow_rate', 'flow_hours'] = 'flow_rate',
+ style: Literal['area', 'stacked_bar', 'line'] = 'stacked_bar',
+ drop_suffix: bool = True,
) -> plotly.graph_objs.Figure | tuple[plt.Figure, plt.Axes]:
- """Plot node balance flows.
-
+ """
+ Plots the node balance of the Component or Bus.
Args:
- save: Whether to save plot (path or boolean).
- show: Whether to display plot.
- colors: Color scheme. Also see plotly.
- engine: Plotting engine ('plotly' or 'matplotlib').
-
- Returns:
- Figure object.
+ save: Whether to save the plot or not. If a path is provided, the plot will be saved at that location.
+ show: Whether to show the plot or not.
+ colors: The colors to use for the plot. See `flixopt.plotting.ColorType` for options.
+ engine: The engine to use for plotting. Can be either 'plotly' or 'matplotlib'.
+ indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}.
+ If None, uses first value for each dimension (except time).
+ If empty dict {}, uses all values.
+ style: The style to use for the dataset. Can be 'flow_rate' or 'flow_hours'.
+ - 'flow_rate': Returns the flow_rates of the Node.
+ - 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours.
+ drop_suffix: Whether to drop the suffix from the variable names.
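+
+ Example:
+ # Illustrative sketch; 'Fernwärme' is a hypothetical bus label and `results` a CalculationResults instance
+ results['Fernwärme'].plot_node_balance(mode='flow_hours', style='area', indexer={'scenario': 'base'})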
"""
+ # Pass an empty indexer (use all values) here; the user indexer is applied exactly once below, which also yields the title suffix
+ ds = self.node_balance(with_last_timestep=True, mode=mode, drop_suffix=drop_suffix, indexer={})
+
+ ds, suffix_parts = _apply_indexer_to_data(ds, indexer, drop=True)
+ suffix = '--' + '-'.join(suffix_parts) if suffix_parts else ''
+
+ title = f'{self.label} (flow rates){suffix}' if mode == 'flow_rate' else f'{self.label} (flow hours){suffix}'
+
if engine == 'plotly':
figure_like = plotting.with_plotly(
- self.node_balance(with_last_timestep=True).to_dataframe(),
+ ds.to_dataframe(),
colors=colors,
- mode='area',
- title=f'Flow rates of {self.label}',
+ style=style,
+ title=title,
)
default_filetype = '.html'
elif engine == 'matplotlib':
figure_like = plotting.with_matplotlib(
- self.node_balance(with_last_timestep=True).to_dataframe(),
+ ds.to_dataframe(),
colors=colors,
- mode='bar',
- title=f'Flow rates of {self.label}',
+ style=style,
+ title=title,
)
default_filetype = '.png'
else:
@@ -462,7 +968,7 @@ def plot_node_balance(
return plotting.export_figure(
figure_like=figure_like,
- default_path=self._calculation_results.folder / f'{self.label} (flow rates)',
+ default_path=self._calculation_results.folder / title,
default_filetype=default_filetype,
user_path=None if isinstance(save, bool) else pathlib.Path(save),
show=show,
@@ -477,9 +983,9 @@ def plot_node_balance_pie(
save: bool | pathlib.Path = False,
show: bool = True,
engine: plotting.PlottingEngine = 'plotly',
+ indexer: dict[FlowSystemDimensions, Any] | None = None,
) -> plotly.graph_objs.Figure | tuple[plt.Figure, list[plt.Axes]]:
"""Plot pie chart of flow hours distribution.
-
Args:
lower_percentage_group: Percentage threshold for "Others" grouping.
colors: Color scheme. Also see plotly.
@@ -487,32 +993,40 @@ def plot_node_balance_pie(
save: Whether to save plot.
show: Whether to display plot.
engine: Plotting engine ('plotly' or 'matplotlib').
+ indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}.
+ If None, uses first value for each dimension.
+ If empty dict {}, uses all values.
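+
+ Example:
+ # Illustrative sketch; 'Fernwärme' is a hypothetical bus label and `results` a CalculationResults instance
+ results['Fernwärme'].plot_node_balance_pie(lower_percentage_group=5, indexer={'scenario': 'base'})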
"""
- inputs = (
- sanitize_dataset(
- ds=self.solution[self.inputs],
- threshold=1e-5,
- drop_small_vars=True,
- zero_small_values=True,
- )
- * self._calculation_results.hours_per_timestep
+ inputs = sanitize_dataset(
+ ds=self.solution[self.inputs] * self._calculation_results.hours_per_timestep,
+ threshold=1e-5,
+ drop_small_vars=True,
+ zero_small_values=True,
+ drop_suffix='|',
)
- outputs = (
- sanitize_dataset(
- ds=self.solution[self.outputs],
- threshold=1e-5,
- drop_small_vars=True,
- zero_small_values=True,
- )
- * self._calculation_results.hours_per_timestep
+ outputs = sanitize_dataset(
+ ds=self.solution[self.outputs] * self._calculation_results.hours_per_timestep,
+ threshold=1e-5,
+ drop_small_vars=True,
+ zero_small_values=True,
+ drop_suffix='|',
)
+ inputs, suffix_parts = _apply_indexer_to_data(inputs, indexer, drop=True)
+ outputs, suffix_parts = _apply_indexer_to_data(outputs, indexer, drop=True)
+ suffix = '--' + '-'.join(suffix_parts) if suffix_parts else ''
+
+ title = f'{self.label} (total flow hours){suffix}'
+
+ inputs = inputs.sum('time')
+ outputs = outputs.sum('time')
+
if engine == 'plotly':
figure_like = plotting.dual_pie_with_plotly(
- inputs.to_dataframe().sum(),
- outputs.to_dataframe().sum(),
+ data_left=inputs.to_pandas(),
+ data_right=outputs.to_pandas(),
colors=colors,
- title=f'Flow hours of {self.label}',
+ title=title,
text_info=text_info,
subtitles=('Inputs', 'Outputs'),
legend_title='Flows',
@@ -522,10 +1036,10 @@ def plot_node_balance_pie(
elif engine == 'matplotlib':
logger.debug('Parameter text_info is not supported for matplotlib')
figure_like = plotting.dual_pie_with_matplotlib(
- inputs.to_dataframe().sum(),
- outputs.to_dataframe().sum(),
+ data_left=inputs.to_pandas(),
+ data_right=outputs.to_pandas(),
colors=colors,
- title=f'Total flow hours of {self.label}',
+ title=title,
subtitles=('Inputs', 'Outputs'),
legend_title='Flows',
lower_percentage_group=lower_percentage_group,
@@ -536,7 +1050,7 @@ def plot_node_balance_pie(
return plotting.export_figure(
figure_like=figure_like,
- default_path=self._calculation_results.folder / f'{self.label} (total flow hours)',
+ default_path=self._calculation_results.folder / title,
default_filetype=default_filetype,
user_path=None if isinstance(save, bool) else pathlib.Path(save),
show=show,
@@ -549,9 +1063,29 @@ def node_balance(
negate_outputs: bool = False,
threshold: float | None = 1e-5,
with_last_timestep: bool = False,
+ mode: Literal['flow_rate', 'flow_hours'] = 'flow_rate',
+ drop_suffix: bool = False,
+ indexer: dict[FlowSystemDimensions, Any] | None = None,
) -> xr.Dataset:
- return sanitize_dataset(
- ds=self.solution[self.inputs + self.outputs],
+ """
+ Returns a dataset with the node balance of the Component or Bus.
+ Args:
+ negate_inputs: Whether to negate the input flow_rates of the Node.
+ negate_outputs: Whether to negate the output flow_rates of the Node.
+ threshold: The threshold for small values. Variables with all values below the threshold are dropped.
+ with_last_timestep: Whether to include the last timestep in the dataset.
+ mode: The mode to use for the dataset. Can be 'flow_rate' or 'flow_hours'.
+ - 'flow_rate': Returns the flow_rates of the Node.
+ - 'flow_hours': Returns the flow_hours of the Node. [flow_hours(t) = flow_rate(t) * dt(t)]. Renames suffixes to |flow_hours.
+ drop_suffix: Whether to drop the suffix from the variable names.
+ indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}.
+ If None, uses first value for each dimension.
+ If empty dict {}, uses all values.
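+
+ Example:
+ # Illustrative sketch; 'Fernwärme' is a hypothetical bus label and `results` a CalculationResults instance
+ ds = results['Fernwärme'].node_balance(mode='flow_hours', negate_outputs=True, indexer={'scenario': 'base'})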
+ """
+ ds = self.solution[self.inputs + self.outputs]
+
+ ds = sanitize_dataset(
+ ds=ds,
threshold=threshold,
timesteps=self._calculation_results.timesteps_extra if with_last_timestep else None,
negate=(
@@ -563,8 +1097,17 @@ def node_balance(
if negate_inputs
else None
),
+ drop_suffix='|' if drop_suffix else None,
)
+ ds, _ = _apply_indexer_to_data(ds, indexer, drop=True)
+
+ if mode == 'flow_hours':
+ ds = ds * self._calculation_results.hours_per_timestep
+ ds = ds.rename_vars({var: var.replace('flow_rate', 'flow_hours') for var in ds.data_vars})
+
+ return ds
+
class BusResults(_NodeResults):
"""Results container for energy/material balance nodes in the system."""
@@ -594,48 +1137,68 @@ def plot_charge_state(
show: bool = True,
colors: plotting.ColorType = 'viridis',
engine: plotting.PlottingEngine = 'plotly',
+ style: Literal['area', 'stacked_bar', 'line'] = 'stacked_bar',
+ indexer: dict[FlowSystemDimensions, Any] | None = None,
) -> plotly.graph_objs.Figure:
"""Plot storage charge state over time, combined with the node balance.
Args:
- save: Whether to save plot.
- show: Whether to display plot.
+ save: Whether to save the plot or not. If a path is provided, the plot will be saved at that location.
+ show: Whether to show the plot or not.
colors: Color scheme. Also see plotly.
- engine: Plotting engine (only 'plotly' supported).
-
- Returns:
- plotly.graph_objs.Figure: Charge state plot.
+ engine: The engine to use for plotting. Can be either 'plotly' or 'matplotlib'.
+ style: The plot style. Can be 'area', 'stacked_bar' or 'line'.
+ indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}.
+ If None, uses first value for each dimension.
+ If empty dict {}, uses all values.
Raises:
ValueError: If component is not a storage.
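+
+ Example:
+ # Illustrative sketch; 'Speicher' is a hypothetical storage label in the results
+ results['Speicher'].plot_charge_state(engine='matplotlib', style='line')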
"""
- if engine != 'plotly':
- raise NotImplementedError(
- f'Plotting engine "{engine}" not implemented for ComponentResults.plot_charge_state.'
- )
-
if not self.is_storage:
raise ValueError(f'Cant plot charge_state. "{self.label}" is not a storage')
- fig = plotting.with_plotly(
- self.node_balance(with_last_timestep=True).to_dataframe(),
- colors=colors,
- mode='area',
- title=f'Operation Balance of {self.label}',
- )
+ # Pass an empty indexer (use all values) here; the user indexer is applied exactly once below, which also yields the title suffix
+ ds = self.node_balance(with_last_timestep=True, indexer={})
+ charge_state = self.charge_state
+
+ ds, suffix_parts = _apply_indexer_to_data(ds, indexer, drop=True)
+ charge_state, suffix_parts = _apply_indexer_to_data(charge_state, indexer, drop=True)
+ suffix = '--' + '-'.join(suffix_parts) if suffix_parts else ''
- # TODO: Use colors for charge state?
+ title = f'Operation Balance of {self.label}{suffix}'
- charge_state = self.charge_state.to_dataframe()
- fig.add_trace(
- plotly.graph_objs.Scatter(
- x=charge_state.index, y=charge_state.values.flatten(), mode='lines', name=self._charge_state
+ if engine == 'plotly':
+ fig = plotting.with_plotly(
+ ds.to_dataframe(),
+ colors=colors,
+ style=style,
+ title=title,
)
- )
+
+ # TODO: Use colors for charge state?
+
+ charge_state = charge_state.to_dataframe()
+ fig.add_trace(
+ plotly.graph_objs.Scatter(
+ x=charge_state.index, y=charge_state.values.flatten(), mode='lines', name=self._charge_state
+ )
+ )
+ elif engine == 'matplotlib':
+ fig, ax = plotting.with_matplotlib(
+ ds.to_dataframe(),
+ colors=colors,
+ style=style,
+ title=title,
+ )
+
+ charge_state = charge_state.to_dataframe()
+ ax.plot(charge_state.index, charge_state.values.flatten(), label=self._charge_state)
+ fig.tight_layout()
+ fig = (fig, ax) # export_figure expects the (figure, axes) pair for matplotlib
return plotting.export_figure(
fig,
- default_path=self._calculation_results.folder / f'{self.label} (charge state)',
+ default_path=self._calculation_results.folder / title,
default_filetype='.html',
user_path=None if isinstance(save, bool) else pathlib.Path(save),
show=show,
@@ -692,6 +1255,42 @@ def get_shares_from(self, element: str):
return self.solution[[name for name in self._variable_names if name.startswith(f'{element}->')]]
+class FlowResults(_ElementResults):
+ def __init__(
+ self,
+ calculation_results: CalculationResults,
+ label: str,
+ variables: list[str],
+ constraints: list[str],
+ start: str,
+ end: str,
+ component: str,
+ ):
+ super().__init__(calculation_results, label, variables, constraints)
+ self.start = start
+ self.end = end
+ self.component = component
+
+ @property
+ def flow_rate(self) -> xr.DataArray:
+ return self.solution[f'{self.label}|flow_rate']
+
+ @property
+ def flow_hours(self) -> xr.DataArray:
+ return (self.flow_rate * self._calculation_results.hours_per_timestep).rename(f'{self.label}|flow_hours')
+
+ @property
+ def size(self) -> xr.DataArray:
+ name = f'{self.label}|size'
+ if name in self.solution:
+ return self.solution[name]
+ try:
+ return self._calculation_results.flow_system.flows[self.label].size.rename(name)
+ except _FlowSystemRestorationError:
+ logger.critical(f'Size of flow {self.label} not available. Returning NaN')
+ return xr.DataArray(np.nan).rename(name)
+
+
class SegmentedCalculationResults:
"""Results container for segmented optimization calculations with temporal decomposition.
@@ -780,7 +1379,7 @@ class SegmentedCalculationResults:
identify potential issues from segmentation approach.
Common Use Cases:
- - **Large-Scale Analysis**: Annual or multi-year optimization results
+ - **Large-Scale Analysis**: Annual or multi-period optimization results
- **Memory-Constrained Systems**: Results from systems exceeding hardware limits
- **Segment Validation**: Verifying segmentation approach effectiveness
- **Performance Monitoring**: Comparing segmented vs. full-horizon solutions
@@ -816,7 +1415,7 @@ def from_file(cls, folder: str | pathlib.Path, name: str):
with open(path.with_suffix('.json'), encoding='utf-8') as f:
meta_data = json.load(f)
return cls(
- [CalculationResults.from_file(folder, name) for name in meta_data['sub_calculations']],
+ [CalculationResults.from_file(folder, sub_name) for sub_name in meta_data['sub_calculations']],
all_timesteps=pd.DatetimeIndex(
[datetime.datetime.fromisoformat(date) for date in meta_data['all_timesteps']], name='time'
),
@@ -841,7 +1440,7 @@ def __init__(
self.overlap_timesteps = overlap_timesteps
self.name = name
self.folder = pathlib.Path(folder) if folder is not None else pathlib.Path.cwd() / 'results'
- self.hours_per_timestep = TimeSeriesCollection.calculate_hours_per_timestep(self.all_timesteps)
+ self.hours_per_timestep = FlowSystem.calculate_hours_per_timestep(self.all_timesteps)
@property
def meta_data(self) -> dict[str, int | list[str]]:
@@ -926,7 +1525,7 @@ def to_file(self, folder: str | pathlib.Path | None = None, name: str | None = N
f'Folder {folder} and its parent do not exist. Please create them first.'
) from e
for segment in self.segment_results:
- segment.to_file(folder=folder, name=f'{name}-{segment.name}', compression=compression)
+ segment.to_file(folder=folder, name=segment.name, compression=compression)
with open(path.with_suffix('.json'), 'w', encoding='utf-8') as f:
json.dump(self.meta_data, f, indent=4, ensure_ascii=False)
@@ -943,6 +1542,7 @@ def plot_heatmap(
save: bool | pathlib.Path = False,
show: bool = True,
engine: plotting.PlottingEngine = 'plotly',
+ indexer: dict[str, Any] | None = None,
):
"""Plot heatmap of time series data.
@@ -956,10 +1556,14 @@ def plot_heatmap(
save: Whether to save plot.
show: Whether to display plot.
engine: Plotting engine.
-
- Returns:
- Figure object.
+ indexer: Optional selection dict, e.g., {'scenario': 'base', 'period': 2024}.
+ If None, uses first value for each dimension.
+ If empty dict {}, uses all values.
"""
+ dataarray, suffix_parts = _apply_indexer_to_data(dataarray, indexer, drop=True)
+ suffix = '--' + '-'.join(suffix_parts) if suffix_parts else ''
+ name = name if not suffix_parts else name + suffix
+
heatmap_data = plotting.heat_map_data_from_df(
dataarray.to_dataframe(name), heatmap_timeframes, heatmap_timesteps_per_frame, 'ffill'
)
@@ -996,6 +1600,7 @@ def sanitize_dataset(
negate: list[str] | None = None,
drop_small_vars: bool = True,
zero_small_values: bool = False,
+ drop_suffix: str | None = None,
) -> xr.Dataset:
"""Clean dataset by handling small values and reindexing time.
@@ -1006,9 +1611,7 @@ def sanitize_dataset(
negate: Variables to negate.
drop_small_vars: Whether to drop variables below threshold.
zero_small_values: Whether to zero values below threshold.
-
- Returns:
- xr.Dataset: Sanitized dataset.
+ drop_suffix: If provided, strip each variable name at the first occurrence of this separator (e.g. '|').
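+
+ Example:
+ # Sketch; variable names like 'Boiler(Q_th)|flow_rate' are hypothetical
+ clean = sanitize_dataset(ds=ds, threshold=1e-5, zero_small_values=True, drop_suffix='|')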
"""
# Create a copy to avoid modifying the original
ds = ds.copy()
@@ -1044,28 +1647,206 @@ def sanitize_dataset(
if timesteps is not None and not ds.indexes['time'].equals(timesteps):
ds = ds.reindex({'time': timesteps}, fill_value=np.nan)
+ if drop_suffix is not None:
+ if not isinstance(drop_suffix, str):
+ raise ValueError(f'drop_suffix must be a str, got {drop_suffix!r}')
+ unique_dict = {}
+ for var in ds.data_vars:
+ new_name = var.split(drop_suffix)[0]
+
+ # If name already exists, keep original name
+ if new_name in unique_dict.values():
+ unique_dict[var] = var
+ else:
+ unique_dict[var] = new_name
+ ds = ds.rename(unique_dict)
+
return ds
def filter_dataset(
ds: xr.Dataset,
- variable_dims: Literal['scalar', 'time'] | None = None,
+ variable_dims: Literal['scalar', 'time', 'scenario', 'timeonly', 'scenarioonly'] | None = None,
+ timesteps: pd.DatetimeIndex | str | pd.Timestamp | None = None,
+ scenarios: pd.Index | str | int | None = None,
+ contains: str | list[str] | None = None,
+ startswith: str | list[str] | None = None,
) -> xr.Dataset:
- """Filter dataset by variable dimensions.
+ """Filter dataset by variable dimensions, indexes, and with string filters for variable names.
Args:
- ds: Dataset to filter.
- variable_dims: Variable dimension to filter ('scalar' or 'time').
+ ds: The dataset to filter.
+ variable_dims: Which variables to select, based on their dimensions.
+ - 'scalar': Get scalar variables (without dimensions)
+ - 'time': Get time-dependent variables (with a time dimension)
+ - 'scenario': Get scenario-dependent variables (with a scenario dimension)
+ - 'timeonly': Get time-dependent variables (with ONLY a time dimension)
+ - 'scenarioonly': Get scenario-dependent variables (with ONLY a scenario dimension)
+ timesteps: Optional time indexes to select. Can be:
+ - pd.DatetimeIndex: Multiple timesteps
+ - str/pd.Timestamp: Single timestep
+ Defaults to all available timesteps.
+ scenarios: Optional scenario indexes to select. Can be:
+ - pd.Index: Multiple scenarios
+ - str/int: Single scenario (int is treated as a label, not an index position)
+ Defaults to all available scenarios.
+ contains: Filter variables that contain this string or strings.
+ If a list is provided, variables must contain ALL strings in the list.
+ startswith: Filter variables that start with this string or strings.
+ If a list is provided, variables must start with ANY of the strings in the list.
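+
+ Example:
+ # Sketch; variable names are hypothetical
+ filtered = filter_dataset(ds, variable_dims='timeonly', startswith='Boiler', contains='flow_rate')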
+ """
+ # First filter by dimensions
+ filtered_ds = ds.copy()
+ if variable_dims is not None:
+ if variable_dims == 'scalar':
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if not filtered_ds[v].dims]]
+ elif variable_dims == 'time':
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if 'time' in filtered_ds[v].dims]]
+ elif variable_dims == 'scenario':
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if 'scenario' in filtered_ds[v].dims]]
+ elif variable_dims == 'timeonly':
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if filtered_ds[v].dims == ('time',)]]
+ elif variable_dims == 'scenarioonly':
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if filtered_ds[v].dims == ('scenario',)]]
+ else:
+ raise ValueError(f'Unknown variable_dims "{variable_dims}" for filter_dataset')
+
+ # Filter by 'contains' parameter
+ if contains is not None:
+ if isinstance(contains, str):
+ # Single string - keep variables that contain this string
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if contains in v]]
+ elif isinstance(contains, list) and all(isinstance(s, str) for s in contains):
+ # List of strings - keep variables that contain ALL strings in the list
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if all(s in v for s in contains)]]
+ else:
+ raise TypeError(f"'contains' must be a string or list of strings, got {type(contains)}")
+
+ # Filter by 'startswith' parameter
+ if startswith is not None:
+ if isinstance(startswith, str):
+ # Single string - keep variables that start with this string
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if v.startswith(startswith)]]
+ elif isinstance(startswith, list) and all(isinstance(s, str) for s in startswith):
+ # List of strings - keep variables that start with ANY of the strings in the list
+ filtered_ds = filtered_ds[[v for v in filtered_ds.data_vars if any(v.startswith(s) for s in startswith)]]
+ else:
+ raise TypeError(f"'startswith' must be a string or list of strings, got {type(startswith)}")
+
+ # Handle time selection if needed
+ if timesteps is not None and 'time' in filtered_ds.dims:
+ try:
+ filtered_ds = filtered_ds.sel(time=timesteps)
+ except KeyError as e:
+ available_times = set(filtered_ds.indexes['time'])
+ requested_times = set([timesteps]) if not isinstance(timesteps, pd.Index) else set(timesteps)
+ missing_times = requested_times - available_times
+ raise ValueError(
+ f'Timesteps not found in dataset: {missing_times}. Available times: {available_times}'
+ ) from e
+
+ # Handle scenario selection if needed
+ if scenarios is not None and 'scenario' in filtered_ds.dims:
+ try:
+ filtered_ds = filtered_ds.sel(scenario=scenarios)
+ except KeyError as e:
+ available_scenarios = set(filtered_ds.indexes['scenario'])
+ requested_scenarios = set([scenarios]) if not isinstance(scenarios, pd.Index) else set(scenarios)
+ missing_scenarios = requested_scenarios - available_scenarios
+ raise ValueError(
+ f'Scenarios not found in dataset: {missing_scenarios}. Available scenarios: {available_scenarios}'
+ ) from e
+
+ return filtered_ds
+
+
+def filter_dataarray_by_coord(da: xr.DataArray, **kwargs: str | list[str] | None) -> xr.DataArray:
+ """Filter flows by node and component attributes.
+
+ Filters are applied in the order they are specified. All filters must match for an edge to be included.
+
+ To recombine filtered dataarrays, use `xr.concat`.
+
+ xr.concat([res.sizes(start='Fernwärme'), res.sizes(end='Fernwärme')], dim='flow')
+
+ Args:
+ da: Flow DataArray with network metadata coordinates.
+ **kwargs: Coord filters as name=value pairs.
Returns:
- xr.Dataset: Filtered dataset.
+ Filtered DataArray with matching edges.
+
+ Raises:
+ AttributeError: If required coordinates are missing.
+ ValueError: If specified nodes don't exist or no matches found.
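+
+ Example:
+ # Sketch; assumes `da` carries 'start'/'end' coordinates, as produced for flow results
+ district_heating_flows = filter_dataarray_by_coord(da, start='Fernwärme')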
"""
- if variable_dims is None:
- return ds
- if variable_dims == 'scalar':
- return ds[[name for name, da in ds.data_vars.items() if len(da.dims) == 0]]
- elif variable_dims == 'time':
- return ds[[name for name, da in ds.data_vars.items() if 'time' in da.dims]]
+ # Helper function to process filters
+ def apply_filter(array, coord_name: str, coord_values: Any | list[Any]):
+ # Verify coord exists
+ if coord_name not in array.coords:
+ raise AttributeError(f"Missing required coordinate '{coord_name}'")
+
+ # Convert single value to list
+ val_list = [coord_values] if isinstance(coord_values, str) else coord_values
+
+ # Verify coord_values exist
+ available = set(array[coord_name].values)
+ missing = [v for v in val_list if v not in available]
+ if missing:
+ raise ValueError(f'{coord_name.title()} value(s) not found: {missing}')
+
+ # Apply filter
+ return array.where(
+ array[coord_name].isin(val_list) if isinstance(coord_values, list) else array[coord_name] == coord_values,
+ drop=True,
+ )
+
+ # Apply filters from kwargs
+ filters = {k: v for k, v in kwargs.items() if v is not None}
+ try:
+ for coord, values in filters.items():
+ da = apply_filter(da, coord, values)
+ except ValueError as e:
+ raise ValueError(f'No edges match criteria: {filters}') from e
+
+ # Verify results exist
+ if da.size == 0:
+ raise ValueError(f'No edges match criteria: {filters}')
+
+ return da
+
+
+def _apply_indexer_to_data(
+ data: xr.DataArray | xr.Dataset, indexer: dict[str, Any] | None = None, drop: bool = False
+) -> tuple[xr.DataArray | xr.Dataset, list[str]]:
+ """
+ Apply indexer selection or auto-select first values for non-time dimensions.
+
+ Args:
+ data: xarray Dataset or DataArray
+ indexer: Optional selection dict
+ If None, uses first value for each dimension (except time).
+ If empty dict {}, uses all values.
+ drop: Whether to drop the selected coordinates from the result.
+
+ Returns:
+ Tuple of (selected_data, selection_string_parts)
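+
+ Example:
+ # Sketch; assumes `data` has a 'scenario' dimension with a label 'base'
+ selected, parts = _apply_indexer_to_data(data, {'scenario': 'base'}, drop=True)
+ # parts == ['base[scenario]']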
+ """
+ selection_string = []
+
+ if indexer is not None:
+ # User provided indexer
+ data = data.sel(indexer, drop=drop)
+ selection_string.extend(f'{v}[{k}]' for k, v in indexer.items())
else:
- raise ValueError(f'Not allowed value for "filter_dataset()": {variable_dims=}')
+ # Auto-select first value for each dimension except 'time'
+ selection = {}
+ for dim in data.dims:
+ if dim != 'time' and dim in data.coords:
+ first_value = data.coords[dim].values[0]
+ selection[dim] = first_value
+ selection_string.append(f'{first_value}[{dim}]')
+ if selection:
+ data = data.sel(selection, drop=drop)
+
+ return data, selection_string
diff --git a/flixopt/structure.py b/flixopt/structure.py
index c5519066c..72efc3df2 100644
--- a/flixopt/structure.py
+++ b/flixopt/structure.py
@@ -8,22 +8,27 @@
import inspect
import json
import logging
-from datetime import datetime
+from dataclasses import dataclass
from io import StringIO
-from typing import TYPE_CHECKING, Any, Literal
+from typing import (
+ TYPE_CHECKING,
+ Any,
+ Literal,
+)
import linopy
import numpy as np
+import pandas as pd
import xarray as xr
from rich.console import Console
from rich.pretty import Pretty
-from .core import TimeSeries, TimeSeriesData
+from . import io as fx_io
+from .core import TimeSeriesData, get_dataarray_stats
if TYPE_CHECKING: # for type checking and preventing circular imports
import pathlib
-
- import pandas as pd
+ from collections.abc import Collection, ItemsView, Iterator
from .effects import EffectCollectionModel
from .flow_system import FlowSystem
@@ -46,215 +51,806 @@ def register_class_for_io(cls):
return cls
-class SystemModel(linopy.Model):
+class SubmodelsMixin:
+ """Mixin that provides submodel functionality for both FlowSystemModel and Submodel."""
+
+ submodels: Submodels
+
+ @property
+ def all_submodels(self) -> list[Submodel]:
+ """Get all submodels including nested ones recursively."""
+ direct_submodels = list(self.submodels.values())
+
+ # Recursively collect nested sub-models
+ nested_submodels = []
+ for submodel in direct_submodels:
+ nested_submodels.extend(submodel.all_submodels)
+
+ return direct_submodels + nested_submodels
+
+ def add_submodels(self, submodel: Submodel, short_name: str | None = None) -> Submodel:
+ """Register a sub-model with the model"""
+ if short_name is None:
+ short_name = submodel.__class__.__name__
+ if short_name in self.submodels:
+ raise ValueError(f'Short name "{short_name}" already assigned to model')
+ self.submodels.add(submodel, name=short_name)
+
+ return submodel
+
+
+class FlowSystemModel(linopy.Model, SubmodelsMixin):
"""
- The SystemModel is the linopy Model that is used to create the mathematical model of the flow_system.
+ The FlowSystemModel is the linopy Model that is used to create the mathematical model of the flow_system.
It is used to create and store the variables and constraints for the flow_system.
+
+ Args:
+ flow_system: The flow_system that is used to create the model.
+ normalize_weights: Whether to automatically normalize the weights to sum up to 1 when solving.
"""
- def __init__(self, flow_system: FlowSystem):
- """
- Args:
- flow_system: The flow_system that is used to create the model.
- """
+ def __init__(self, flow_system: FlowSystem, normalize_weights: bool):
super().__init__(force_dim_names=True)
self.flow_system = flow_system
- self.time_series_collection = flow_system.time_series_collection
+ self.normalize_weights = normalize_weights
self.effects: EffectCollectionModel | None = None
+ self.submodels: Submodels = Submodels({})
def do_modeling(self):
self.effects = self.flow_system.effects.create_model(self)
- self.effects.do_modeling()
- component_models = [component.create_model(self) for component in self.flow_system.components.values()]
- bus_models = [bus.create_model(self) for bus in self.flow_system.buses.values()]
- for component_model in component_models:
- component_model.do_modeling()
- for bus_model in bus_models: # Buses after Components, because FlowModels are created in ComponentModels
- bus_model.do_modeling()
+ for component in self.flow_system.components.values():
+ component.create_model(self)
+ for bus in self.flow_system.buses.values():
+ bus.create_model(self)
+
+ # Add scenario equality constraints after all elements are modeled
+ self._add_scenario_equality_constraints()
+
+ def _add_scenario_equality_for_parameter_type(
+ self,
+ parameter_type: Literal['flow_rate', 'size'],
+ config: bool | list[str],
+ ):
+ """Add scenario equality constraints for a specific parameter type.
+
+ Args:
+ parameter_type: The type of parameter ('flow_rate' or 'size')
+ config: Configuration value (True = equalize all, False = equalize none, list = equalize these)
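+
+ Example:
+ # Sketch; 'Boiler(Q_th)' is a hypothetical flow label whose size should not vary per scenario
+ self._add_scenario_equality_for_parameter_type('size', ['Boiler(Q_th)'])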
+ """
+ if config is False:
+ return # All vary per scenario, no constraints needed
+
+ suffix = f'|{parameter_type}'
+ if config is True:
+ # All should be scenario-independent
+ vars_to_constrain = [var for var in self.variables if var.endswith(suffix)]
+ else:
+ # Only those in the list should be scenario-independent
+ all_vars = [var for var in self.variables if var.endswith(suffix)]
+ to_equalize = {f'{element}{suffix}' for element in config}
+
+ # Validate that all specified labels correspond to existing variables
+ missing_vars = sorted(to_equalize - set(all_vars))
+ if missing_vars:
+ param_name = 'scenario_independent_sizes' if parameter_type == 'size' else 'scenario_independent_flow_rates'
+ raise ValueError(f'{param_name} contains invalid labels: {missing_vars}')
+
+ vars_to_constrain = [var for var in all_vars if var in to_equalize]
+
+ logger.debug(f'Adding scenario equality constraints for {len(vars_to_constrain)} {parameter_type} variables')
+ for var in vars_to_constrain:
+ self.add_constraints(
+ self.variables[var].isel(scenario=0) == self.variables[var].isel(scenario=slice(1, None)),
+ name=f'{var}|scenario_independent',
+ )
+
+ def _add_scenario_equality_constraints(self):
+ """Add equality constraints to equalize variables across scenarios based on FlowSystem configuration."""
+ # Only proceed if we have scenarios
+ if self.flow_system.scenarios is None or len(self.flow_system.scenarios) <= 1:
+ return
+
+ self._add_scenario_equality_for_parameter_type('flow_rate', self.flow_system.scenario_independent_flow_rates)
+ self._add_scenario_equality_for_parameter_type('size', self.flow_system.scenario_independent_sizes)
@property
def solution(self):
solution = super().solution
+ solution['objective'] = self.objective.value
solution.attrs = {
'Components': {
- comp.label_full: comp.model.results_structure()
+ comp.label_full: comp.submodel.results_structure()
for comp in sorted(
self.flow_system.components.values(), key=lambda component: component.label_full.upper()
)
},
'Buses': {
- bus.label_full: bus.model.results_structure()
+ bus.label_full: bus.submodel.results_structure()
for bus in sorted(self.flow_system.buses.values(), key=lambda bus: bus.label_full.upper())
},
'Effects': {
- effect.label_full: effect.model.results_structure()
+ effect.label_full: effect.submodel.results_structure()
for effect in sorted(self.flow_system.effects, key=lambda effect: effect.label_full.upper())
},
+ 'Flows': {
+ flow.label_full: flow.submodel.results_structure()
+ for flow in sorted(self.flow_system.flows.values(), key=lambda flow: flow.label_full.upper())
+ },
}
- return solution.reindex(time=self.time_series_collection.timesteps_extra)
+ return solution.reindex(time=self.flow_system.timesteps_extra)
@property
def hours_per_step(self):
- return self.time_series_collection.hours_per_timestep
+ return self.flow_system.hours_per_timestep
@property
def hours_of_previous_timesteps(self):
- return self.time_series_collection.hours_of_previous_timesteps
+ return self.flow_system.hours_of_previous_timesteps
- @property
- def coords(self) -> tuple[pd.DatetimeIndex]:
- return (self.time_series_collection.timesteps,)
+ def get_coords(
+ self,
+ dims: Collection[str] | None = None,
+ extra_timestep: bool = False,
+ ) -> xr.Coordinates | None:
+ """
+ Returns the coordinates of the model
+
+ Args:
+ dims: The dimensions to include in the coordinates. If None, includes all dimensions
+ extra_timestep: If True, uses extra timesteps instead of regular timesteps
+
+ Returns:
+ The coordinates of the model, or None if no coordinates are available
+
+ Raises:
+ ValueError: If extra_timestep=True but 'time' is not in dims
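+
+ Example:
+ # Sketch: time coordinates including the extra final timestep
+ coords = model.get_coords(dims=['time'], extra_timestep=True)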
+ """
+ if extra_timestep and dims is not None and 'time' not in dims:
+ raise ValueError('extra_timestep=True requires "time" to be included in dims')
+
+ if dims is None:
+ coords = dict(self.flow_system.coords)
+ else:
+ coords = {k: v for k, v in self.flow_system.coords.items() if k in dims}
+
+ if extra_timestep and coords:
+ coords['time'] = self.flow_system.timesteps_extra
+
+ return xr.Coordinates(coords) if coords else None
@property
- def coords_extra(self) -> tuple[pd.DatetimeIndex]:
- return (self.time_series_collection.timesteps_extra,)
+ def weights(self) -> int | xr.DataArray:
+ """Returns the weights of the FlowSystem. Normalizes to 1 if normalize_weights is True"""
+ if self.flow_system.weights is not None:
+ weights = self.flow_system.weights
+ else:
+ weights = self.flow_system.fit_to_model_coords('weights', 1, dims=['period', 'scenario'])
+
+ if not self.normalize_weights:
+ return weights
+
+ return weights / weights.sum()
+
+ def __repr__(self) -> str:
+ """
+ Return a string representation of the FlowSystemModel, borrowed from linopy.Model.
+ """
+ # Extract content from existing representations
+ sections = {
+ f'Variables: [{len(self.variables)}]': self.variables.__repr__().split('\n', 2)[2],
+ f'Constraints: [{len(self.constraints)}]': self.constraints.__repr__().split('\n', 2)[2],
+ f'Submodels: [{len(self.submodels)}]': self.submodels.__repr__().split('\n', 2)[2],
+ 'Status': self.status,
+ }
+
+ # Format sections with headers and underlines
+ formatted_sections = []
+ for section_header, section_content in sections.items():
+ formatted_sections.append(f'{section_header}\n{"-" * len(section_header)}\n{section_content}')
+
+ title = f'FlowSystemModel ({self.type})'
+ all_sections = '\n'.join(formatted_sections)
+
+ return f'{title}\n{"=" * len(title)}\n\n{all_sections}'
class Interface:
"""
- This class is used to collect arguments about a Model. Its the base class for all Elements and Models in flixopt.
+ Base class for all Elements and Models in flixopt that provides serialization capabilities.
+
+ This class enables automatic serialization/deserialization of objects containing xarray DataArrays
+ and nested Interface objects to/from xarray Datasets and NetCDF files. It uses introspection
+ of constructor parameters to automatically handle most serialization scenarios.
+
+ Key Features:
+ - Automatic extraction and restoration of xarray DataArrays
+ - Support for nested Interface objects
+ - NetCDF and JSON export/import
+ - Recursive handling of complex nested structures
+
+ Subclasses must implement:
+ transform_data(flow_system): Transform data to match FlowSystem dimensions
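+
+ Example:
+ # Illustrative round-trip, assuming `obj` is any Interface instance with uniquely named DataArrays
+ ds = obj.to_dataset()
+ restored = type(obj).from_dataset(ds)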
"""
- def transform_data(self, flow_system: FlowSystem):
- """Transforms the data of the interface to match the FlowSystem's dimensions"""
- raise NotImplementedError('Every Interface needs a transform_data() method')
+ def transform_data(self, flow_system: FlowSystem, name_prefix: str = '') -> None:
+ """Transform the data of the interface to match the FlowSystem's dimensions.
+
+ Args:
+ flow_system: The FlowSystem containing timing and dimensional information
+ name_prefix: The prefix to use for the names of the variables. Defaults to '', which results in no prefix.
- def infos(self, use_numpy: bool = True, use_element_label: bool = False) -> dict:
+ Raises:
+ NotImplementedError: Must be implemented by subclasses
"""
- Generate a dictionary representation of the object's constructor arguments.
- Excludes default values and empty dictionaries and lists.
- Converts data to be compatible with JSON.
+ raise NotImplementedError('Every Interface subclass needs a transform_data() method')
- Args:
- use_numpy: Whether to convert NumPy arrays to lists. Defaults to True.
- If True, numeric numpy arrays (`np.ndarray`) are preserved as-is.
- If False, they are converted to lists.
- use_element_label: Whether to use the element label instead of the infos of the element. Defaults to False.
- Note that Elements used as keys in dictionaries are always converted to their labels.
+ def _create_reference_structure(self) -> tuple[dict, dict[str, xr.DataArray]]:
+ """
+ Convert all DataArrays to references and extract them.
+ This is the core method that both to_dict() and to_dataset() build upon.
Returns:
- A dictionary representation of the object's constructor arguments.
+ Tuple of (reference_structure, extracted_arrays_dict)
+ Raises:
+ ValueError: If DataArrays don't have unique names or are duplicated
"""
- # Get the constructor arguments and their default values
- init_params = sorted(
- inspect.signature(self.__init__).parameters.items(),
- key=lambda x: (x[0].lower() != 'label', x[0].lower()), # Prioritize 'label'
- )
- # Build a dict of attribute=value pairs, excluding defaults
- details = {'class': ':'.join([cls.__name__ for cls in self.__class__.__mro__])}
- for name, param in init_params:
- if name == 'self':
+ # Get constructor parameters using caching for performance
+ if not hasattr(self, '_cached_init_params'):
+ self._cached_init_params = list(inspect.signature(self.__init__).parameters.keys())
+
+ # Process all constructor parameters
+ reference_structure = {'__class__': self.__class__.__name__}
+ all_extracted_arrays = {}
+
+ for name in self._cached_init_params:
+ if name == 'self': # Skip self; pd.Index parameters (e.g. timesteps) are skipped below, as they are stored directly in Datasets
+ continue
+
+ value = getattr(self, name, None)
+
+ if value is None:
continue
- value, default = getattr(self, name, None), param.default
- # Ignore default values and empty dicts and list
- if np.all(value == default) or (isinstance(value, (dict, list)) and not value):
+ if isinstance(value, pd.Index):
+ logger.debug(f'Skipping {name=} because it is an Index')
continue
- details[name] = copy_and_convert_datatypes(value, use_numpy, use_element_label)
- return details
- def to_json(self, path: str | pathlib.Path):
+ # Extract arrays and get reference structure
+ processed_value, extracted_arrays = self._extract_dataarrays_recursive(value, name)
+
+ # Check for array name conflicts
+ conflicts = set(all_extracted_arrays.keys()) & set(extracted_arrays.keys())
+ if conflicts:
+ raise ValueError(
+ f'DataArray name conflicts detected: {conflicts}. '
+ f'Each DataArray must have a unique name for serialization.'
+ )
+
+ # Add extracted arrays to the collection
+ all_extracted_arrays.update(extracted_arrays)
+
+ # Only store in structure if it's not None/empty after processing
+ if processed_value is not None and not self._is_empty_container(processed_value):
+ reference_structure[name] = processed_value
+
+ return reference_structure, all_extracted_arrays
+
+ @staticmethod
+ def _is_empty_container(obj) -> bool:
+ """Check if object is an empty container (dict, list, tuple, set)."""
+ return isinstance(obj, (dict, list, tuple, set)) and len(obj) == 0
+
+ def _extract_dataarrays_recursive(self, obj, context_name: str = '') -> tuple[Any, dict[str, xr.DataArray]]:
"""
- Saves the element to a json file.
- This not meant to be reloaded and recreate the object, but rather used to document or compare the object.
+ Recursively extract DataArrays from nested structures.
Args:
- path: The path to the json file.
- """
- data = get_compact_representation(self.infos(use_numpy=True, use_element_label=True))
- with open(path, 'w', encoding='utf-8') as f:
- json.dump(data, f, indent=4, ensure_ascii=False)
+ obj: Object to process
+ context_name: Name context for better error messages
- def to_dict(self) -> dict:
- """Convert the object to a dictionary representation."""
- data = {'__class__': self.__class__.__name__}
+ Returns:
+ Tuple of (processed_object_with_references, extracted_arrays_dict)
- # Get the constructor parameters
- init_params = inspect.signature(self.__init__).parameters
+ Raises:
+ ValueError: If DataArrays don't have unique names
+ """
+ extracted_arrays = {}
+
+ # Handle DataArrays directly - use their unique name
+ if isinstance(obj, xr.DataArray):
+ if not obj.name:
+ raise ValueError(
+ f'DataArrays must have a unique name for serialization. '
+ f'Unnamed DataArray found in {context_name}. Please set array.name = "unique_name"'
+ )
- for name in init_params:
- if name == 'self':
- continue
+ array_name = str(obj.name) # Ensure string type
+ if array_name in extracted_arrays:
+ raise ValueError(
+ f'DataArray name "{array_name}" is duplicated in {context_name}. '
+ f'Each DataArray must have a unique name for serialization.'
+ )
- value = getattr(self, name, None)
- data[name] = self._serialize_value(value)
-
- return data
-
- def _serialize_value(self, value: Any):
- """Helper method to serialize a value based on its type."""
- if value is None:
- return None
- elif isinstance(value, Interface):
- return value.to_dict()
- elif isinstance(value, (list, tuple)):
- return self._serialize_list(value)
- elif isinstance(value, dict):
- return self._serialize_dict(value)
+ extracted_arrays[array_name] = obj
+ return f':::{array_name}', extracted_arrays
+
+ # Handle Interface objects - extract their DataArrays too
+ elif isinstance(obj, Interface):
+ try:
+ interface_structure, interface_arrays = obj._create_reference_structure()
+ extracted_arrays.update(interface_arrays)
+ return interface_structure, extracted_arrays
+ except Exception as e:
+ raise ValueError(f'Failed to process nested Interface object in {context_name}: {e}') from e
+
+ # Handle sequences (lists, tuples)
+ elif isinstance(obj, (list, tuple)):
+ processed_items = []
+ for i, item in enumerate(obj):
+ item_context = f'{context_name}[{i}]' if context_name else f'item[{i}]'
+ processed_item, nested_arrays = self._extract_dataarrays_recursive(item, item_context)
+ extracted_arrays.update(nested_arrays)
+ processed_items.append(processed_item)
+ return processed_items, extracted_arrays
+
+ # Handle dictionaries
+ elif isinstance(obj, dict):
+ processed_dict = {}
+ for key, value in obj.items():
+ key_context = f'{context_name}.{key}' if context_name else str(key)
+ processed_value, nested_arrays = self._extract_dataarrays_recursive(value, key_context)
+ extracted_arrays.update(nested_arrays)
+ processed_dict[key] = processed_value
+ return processed_dict, extracted_arrays
+
+ # Handle sets (convert to list for JSON compatibility)
+ elif isinstance(obj, set):
+ processed_items = []
+ for i, item in enumerate(obj):
+ item_context = f'{context_name}.set_item[{i}]' if context_name else f'set_item[{i}]'
+ processed_item, nested_arrays = self._extract_dataarrays_recursive(item, item_context)
+ extracted_arrays.update(nested_arrays)
+ processed_items.append(processed_item)
+ return processed_items, extracted_arrays
+
+ # For all other types, serialize to basic types
else:
- return value
+ return self._serialize_to_basic_types(obj), extracted_arrays
+
+ def _handle_deprecated_kwarg(
+ self,
+ kwargs: dict,
+ old_name: str,
+ new_name: str,
+ current_value: Any = None,
+ transform: callable | None = None,
+ check_conflict: bool = True,
+ ) -> Any:
+ """
+ Handle a deprecated keyword argument by issuing a warning and returning the appropriate value.
+
+ This centralizes the deprecation pattern used across multiple classes (Source, Sink, InvestParameters, etc.).
+
+ Args:
+ kwargs: Dictionary of keyword arguments to check and modify
+ old_name: Name of the deprecated parameter
+ new_name: Name of the replacement parameter
+ current_value: Current value of the new parameter (if already set)
+ transform: Optional callable to transform the old value before returning (e.g., lambda x: [x] to wrap in list)
+ check_conflict: Whether to check if both old and new parameters are specified (default: True).
+ Note: For parameters with non-None default values (e.g., bool parameters with default=False),
+ set check_conflict=False since we cannot distinguish between an explicit value and the default.
+
+ Returns:
+ The value to use (either from old parameter or current_value)
+
+ Raises:
+ ValueError: If both old and new parameters are specified and check_conflict is True
+
+ Example:
+ # For parameters where None is the default (conflict checking works):
+ value = self._handle_deprecated_kwarg(kwargs, 'old_param', 'new_param', current_value)
+
+ # For parameters with non-None defaults (disable conflict checking):
+ mandatory = self._handle_deprecated_kwarg(
+ kwargs, 'optional', 'mandatory', mandatory,
+ transform=lambda x: not x,
+ check_conflict=False # Cannot detect if mandatory was explicitly passed
+ )
+ """
+ import warnings
+
+ old_value = kwargs.pop(old_name, None)
+ if old_value is not None:
+ warnings.warn(
+ f'The use of the "{old_name}" argument is deprecated. Use the "{new_name}" argument instead.',
+ DeprecationWarning,
+ stacklevel=3, # Stack: this method -> __init__ -> caller
+ )
+ # Check for conflicts: only raise error if both were explicitly provided
+ if check_conflict and current_value is not None:
+ raise ValueError(f'Either {old_name} or {new_name} can be specified, but not both.')
+
+ # Apply transformation if provided
+ if transform is not None:
+ return transform(old_value)
+ return old_value
+
+ return current_value
+
+ def _validate_kwargs(self, kwargs: dict, class_name: str = None) -> None:
+ """
+ Validate that no unexpected keyword arguments are present in kwargs.
+
+ This method uses inspect to get the actual function signature and filters out
+ any parameters that are not defined in the __init__ method, while also
+ handling the special case of 'kwargs' itself which can appear during deserialization.
- def _serialize_list(self, items):
- """Serialize a list of items."""
- return [self._serialize_value(item) for item in items]
+ Args:
+ kwargs: Dictionary of keyword arguments to validate
+ class_name: Optional class name for error messages. If None, uses self.__class__.__name__
+
+ Raises:
+ TypeError: If unexpected keyword arguments are found
+ """
+ if not kwargs:
+ return
- def _serialize_dict(self, d):
- """Serialize a dictionary of items."""
- return {k: self._serialize_value(v) for k, v in d.items()}
+ sig = inspect.signature(self.__init__)
+ known_params = set(sig.parameters.keys()) - {'self', 'kwargs'}
+ # Also filter out 'kwargs' itself which can appear during deserialization
+ extra_kwargs = {k: v for k, v in kwargs.items() if k not in known_params and k != 'kwargs'}
+
+ if extra_kwargs:
+ class_name = class_name or self.__class__.__name__
+ unexpected_params = ', '.join(f"'{param}'" for param in extra_kwargs.keys())
+ raise TypeError(f'{class_name}.__init__() got unexpected keyword argument(s): {unexpected_params}')
@classmethod
- def _deserialize_dict(cls, data: dict) -> dict | Interface:
- if '__class__' in data:
- class_name = data.pop('__class__')
- try:
- class_type = CLASS_REGISTRY[class_name]
- if issubclass(class_type, Interface):
- # Use _deserialize_dict to process the arguments
- processed_data = {k: cls._deserialize_value(v) for k, v in data.items()}
- return class_type(**processed_data)
- else:
- raise ValueError(f'Class "{class_name}" is not an Interface.')
- except (AttributeError, KeyError) as e:
- raise ValueError(f'Class "{class_name}" could not get reconstructed.') from e
- else:
- return {k: cls._deserialize_value(v) for k, v in data.items()}
+ def _resolve_dataarray_reference(
+ cls, reference: str, arrays_dict: dict[str, xr.DataArray]
+ ) -> xr.DataArray | TimeSeriesData:
+ """
+ Resolve a single DataArray reference (:::name) to actual DataArray or TimeSeriesData.
+
+ Args:
+ reference: Reference string starting with ":::"
+ arrays_dict: Dictionary of available DataArrays
+
+ Returns:
+ Resolved DataArray or TimeSeriesData object
+
+ Raises:
+ ValueError: If referenced array is not found
+ """
+ array_name = reference[3:] # Remove ":::" prefix
+ if array_name not in arrays_dict:
+ raise ValueError(f"Referenced DataArray '{array_name}' not found in dataset")
+
+ array = arrays_dict[array_name]
+
+ # Handle null values: log an error and drop timesteps that are all-null
+ if array.isnull().any():
+ logger.error(f"DataArray '{array_name}' contains null values. Dropping all-null along present dims.")
+ if 'time' in array.dims:
+ array = array.dropna(dim='time', how='all')
+
+ # Check if this should be restored as TimeSeriesData
+ if TimeSeriesData.is_timeseries_data(array):
+ return TimeSeriesData.from_dataarray(array)
+
+ return array
@classmethod
- def _deserialize_list(cls, data: list) -> list:
- return [cls._deserialize_value(value) for value in data]
+ def _resolve_reference_structure(cls, structure, arrays_dict: dict[str, xr.DataArray]):
+ """
+ Convert reference structure back to actual objects using provided arrays.
+
+ Args:
+ structure: Structure containing references (:::name) or special type markers
+ arrays_dict: Dictionary of available DataArrays
+
+ Returns:
+ Structure with references resolved to actual DataArrays or objects
+
+ Raises:
+ ValueError: If referenced arrays are not found or class is not registered
+ """
+ # Handle DataArray references
+ if isinstance(structure, str) and structure.startswith(':::'):
+ return cls._resolve_dataarray_reference(structure, arrays_dict)
+
+ elif isinstance(structure, list):
+ resolved_list = []
+ for item in structure:
+ resolved_item = cls._resolve_reference_structure(item, arrays_dict)
+ if resolved_item is not None: # Filter out None values from missing references
+ resolved_list.append(resolved_item)
+ return resolved_list
+
+ elif isinstance(structure, dict):
+ if structure.get('__class__'):
+ class_name = structure['__class__']
+ if class_name not in CLASS_REGISTRY:
+ raise ValueError(
+ f"Class '{class_name}' not found in CLASS_REGISTRY. "
+ f'Available classes: {list(CLASS_REGISTRY.keys())}'
+ )
+
+ # This is a nested Interface object - restore it recursively
+ nested_class = CLASS_REGISTRY[class_name]
+ # Remove the __class__ key and process the rest
+ nested_data = {k: v for k, v in structure.items() if k != '__class__'}
+ # Resolve references in the nested data
+ resolved_nested_data = cls._resolve_reference_structure(nested_data, arrays_dict)
+
+ try:
+ return nested_class(**resolved_nested_data)
+ except Exception as e:
+ raise ValueError(f'Failed to create instance of {class_name}: {e}') from e
+ else:
+ # Regular dictionary - resolve references in values
+ resolved_dict = {}
+ for key, value in structure.items():
+ resolved_value = cls._resolve_reference_structure(value, arrays_dict)
+ if resolved_value is not None or value is None: # Keep None values if they were originally None
+ resolved_dict[key] = resolved_value
+ return resolved_dict
+
+ else:
+ return structure
+
+ def _serialize_to_basic_types(self, obj):
+ """
+ Convert object to basic Python types only (no DataArrays, no custom objects).
+
+ Args:
+ obj: Object to serialize
+
+ Returns:
+ Object converted to basic Python types (str, int, float, bool, list, dict)
+ """
+ if obj is None or isinstance(obj, (str, int, float, bool)):
+ return obj
+ elif isinstance(obj, np.integer):
+ return int(obj)
+ elif isinstance(obj, np.floating):
+ return float(obj)
+ elif isinstance(obj, np.bool_):
+ return bool(obj)
+ elif isinstance(obj, pd.DataFrame):
+ # DataFrame has no .tolist(); list(df) would only yield the column names
+ return obj.values.tolist()
+ elif isinstance(obj, (np.ndarray, pd.Series)):
+ return obj.tolist()
+ elif isinstance(obj, dict):
+ return {k: self._serialize_to_basic_types(v) for k, v in obj.items()}
+ elif isinstance(obj, (list, tuple)):
+ return [self._serialize_to_basic_types(item) for item in obj]
+ elif isinstance(obj, set):
+ return [self._serialize_to_basic_types(item) for item in obj]
+ elif hasattr(obj, 'isoformat'): # datetime objects
+ return obj.isoformat()
+ elif hasattr(obj, '__dict__'): # Custom objects with attributes
+ logger.warning(f'Converting custom object {type(obj)} to dict representation: {obj}')
+ return {str(k): self._serialize_to_basic_types(v) for k, v in obj.__dict__.items()}
+ else:
+ # For any other object, try to convert to string as fallback
+ logger.error(f'Converting unknown type {type(obj)} to string: {obj}')
+ return str(obj)
+
+ def to_dataset(self) -> xr.Dataset:
+ """
+ Convert the object to an xarray Dataset representation.
+ All DataArrays become dataset variables, everything else goes to attrs.
+
+ It's recommended to only call this method on Interfaces with all numeric data stored as xr.DataArrays.
+ Interfaces inside a FlowSystem are automatically converted to this form after connecting and transforming the FlowSystem.
+
+ Returns:
+ xr.Dataset: Dataset containing all DataArrays with basic objects only in attributes
+
+ Raises:
+ ValueError: If serialization fails due to naming conflicts or invalid data
+ """
+ try:
+ reference_structure, extracted_arrays = self._create_reference_structure()
+ # Create the dataset with extracted arrays as variables and structure as attrs
+ return xr.Dataset(extracted_arrays, attrs=reference_structure)
+ except Exception as e:
+ raise ValueError(
+ f"Failed to convert {self.__class__.__name__} to dataset. It's recommended to only call this method on "
+ f'a fully connected and transformed FlowSystem, or Interfaces inside such a FlowSystem. '
+ f'Original Error: {e}'
+ ) from e
+
+ def to_netcdf(self, path: str | pathlib.Path, compression: int = 0):
+ """
+ Save the object to a NetCDF file.
+
+ Args:
+ path: Path to save the NetCDF file
+ compression: Compression level (0-9)
+
+ Raises:
+ ValueError: If serialization fails
+ IOError: If file cannot be written
+ """
+ try:
+ ds = self.to_dataset()
+ fx_io.save_dataset_to_netcdf(ds, path, compression=compression)
+ except Exception as e:
+ raise OSError(f'Failed to save {self.__class__.__name__} to NetCDF file {path}: {e}') from e
@classmethod
- def _deserialize_value(cls, value: Any):
- """Helper method to deserialize a value based on its type."""
- if value is None:
- return None
- elif isinstance(value, dict):
- return cls._deserialize_dict(value)
- elif isinstance(value, list):
- return cls._deserialize_list(value)
- return value
+ def from_dataset(cls, ds: xr.Dataset) -> Interface:
+ """
+ Create an instance from an xarray Dataset.
+
+ Args:
+ ds: Dataset containing the object data
+
+ Returns:
+ Interface instance
+
+ Raises:
+ ValueError: If dataset format is invalid or class mismatch
+ """
+ try:
+ # Get class name and verify it matches
+ class_name = ds.attrs.get('__class__')
+ if class_name and class_name != cls.__name__:
+ logger.warning(f"Dataset class '{class_name}' doesn't match target class '{cls.__name__}'")
+
+ # Get the reference structure from attrs
+ reference_structure = dict(ds.attrs)
+
+ # Remove the class name since it's not a constructor parameter
+ reference_structure.pop('__class__', None)
+
+ # Create arrays dictionary from dataset variables
+ arrays_dict = {name: array for name, array in ds.data_vars.items()}
+
+ # Resolve all references using the centralized method
+ resolved_params = cls._resolve_reference_structure(reference_structure, arrays_dict)
+
+ return cls(**resolved_params)
+ except Exception as e:
+ raise ValueError(f'Failed to create {cls.__name__} from dataset: {e}') from e
@classmethod
- def from_dict(cls, data: dict) -> Interface:
+ def from_netcdf(cls, path: str | pathlib.Path) -> Interface:
"""
- Create an instance from a dictionary representation.
+ Load an instance from a NetCDF file.
Args:
- data: Dictionary containing the data for the object.
+ path: Path to the NetCDF file
+
+ Returns:
+ Interface instance
+
+ Raises:
+ IOError: If file cannot be read
+ ValueError: If file format is invalid
"""
- return cls._deserialize_dict(data)
+ try:
+ ds = fx_io.load_dataset_from_netcdf(path)
+ return cls.from_dataset(ds)
+ except Exception as e:
+ raise OSError(f'Failed to load {cls.__name__} from NetCDF file {path}: {e}') from e
- def __repr__(self):
- # Get the constructor arguments and their current values
- init_signature = inspect.signature(self.__init__)
- init_args = init_signature.parameters
+ def get_structure(self, clean: bool = False, stats: bool = False) -> dict:
+ """
+ Get object structure as a dictionary.
- # Create a dictionary with argument names and their values
- args_str = ', '.join(f'{name}={repr(getattr(self, name, None))}' for name in init_args if name != 'self')
- return f'{self.__class__.__name__}({args_str})'
+ Args:
+ clean: If True, remove None and empty dicts and lists.
+ stats: If True, replace DataArray references with statistics
+
+ Returns:
+ Dictionary representation of the object structure
+ """
+ reference_structure, extracted_arrays = self._create_reference_structure()
+
+ if stats:
+ # Replace references with statistics
+ reference_structure = self._replace_references_with_stats(reference_structure, extracted_arrays)
+
+ if clean:
+ return fx_io.remove_none_and_empty(reference_structure)
+ return reference_structure
+
+ def _replace_references_with_stats(self, structure, arrays_dict: dict[str, xr.DataArray]):
+ """Replace DataArray references with statistical summaries."""
+ if isinstance(structure, str) and structure.startswith(':::'):
+ array_name = structure[3:]
+ if array_name in arrays_dict:
+ return get_dataarray_stats(arrays_dict[array_name])
+ return structure
+
+ elif isinstance(structure, dict):
+ return {k: self._replace_references_with_stats(v, arrays_dict) for k, v in structure.items()}
+
+ elif isinstance(structure, list):
+ return [self._replace_references_with_stats(item, arrays_dict) for item in structure]
+
+ return structure
+
+ def to_json(self, path: str | pathlib.Path):
+ """
+ Save the object to a JSON file.
+ This is meant for documentation and comparison, not for reloading.
+
+ Args:
+ path: The path to the JSON file.
+
+ Raises:
+ IOError: If file cannot be written
+ """
+ try:
+ # Use the stats mode for JSON export (cleaner output)
+ data = self.get_structure(clean=True, stats=True)
+ with open(path, 'w', encoding='utf-8') as f:
+ json.dump(data, f, indent=4, ensure_ascii=False)
+ except Exception as e:
+ raise OSError(f'Failed to save {self.__class__.__name__} to JSON file {path}: {e}') from e
+
+ def __repr__(self):
+ """Return a detailed string representation for debugging."""
+ try:
+ # Get the constructor arguments and their current values
+ init_signature = inspect.signature(self.__init__)
+ init_args = init_signature.parameters
+
+ # Create a dictionary with argument names and their values, with better formatting
+ args_parts = []
+ for name in init_args:
+ if name == 'self':
+ continue
+ value = getattr(self, name, None)
+ # Truncate long representations
+ value_repr = repr(value)
+ if len(value_repr) > 50:
+ value_repr = value_repr[:47] + '...'
+ args_parts.append(f'{name}={value_repr}')
+
+ args_str = ', '.join(args_parts)
+ return f'{self.__class__.__name__}({args_str})'
+ except Exception:
+ # Fallback if introspection fails
+ return f'{self.__class__.__name__}()'
def __str__(self):
- return get_str_representation(self.infos(use_numpy=True, use_element_label=True))
+ """Return a user-friendly string representation."""
+ try:
+ data = self.get_structure(clean=True, stats=True)
+ with StringIO() as output_buffer:
+ console = Console(file=output_buffer, width=1000) # Adjust width as needed
+ console.print(Pretty(data, expand_all=True, indent_guides=True))
+ return output_buffer.getvalue()
+ except Exception:
+ # Fallback if structure generation fails
+ return f'{self.__class__.__name__} instance'
+
+ def copy(self) -> Interface:
+ """
+ Create a copy of the Interface object.
+
+ Uses the existing serialization infrastructure to ensure proper copying
+ of all DataArrays and nested objects.
+
+ Returns:
+ A new instance of the same class with copied data.
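+
+        Examples:
+            Illustrative sketch, with ``fx.OnOffParameters`` as an example
+            Interface subclass:
+
+            >>> import flixopt as fx
+            >>> params = fx.OnOffParameters(effects_per_switch_on=0.01)
+            >>> clone = params.copy()
+            >>> clone is params  # independent deep copy
+            False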
+ """
+ # Convert to dataset, copy it, and convert back
+ dataset = self.to_dataset().copy(deep=True)
+ return self.__class__.from_dataset(dataset)
+
+ def __copy__(self):
+ """Support for copy.copy()."""
+ return self.copy()
+
+ def __deepcopy__(self, memo):
+ """Support for copy.deepcopy()."""
+ return self.copy()
class Element(Interface):
@@ -268,14 +864,14 @@ def __init__(self, label: str, meta_data: dict | None = None):
"""
self.label = Element._valid_label(label)
self.meta_data = meta_data if meta_data is not None else {}
- self.model: ElementModel | None = None
+ self.submodel: ElementModel | None = None
def _plausibility_checks(self) -> None:
"""This function is used to do some basic plausibility checks for each Element during initialization.
This is run after all data is transformed to the correct format/type"""
raise NotImplementedError('Every Element needs a _plausibility_checks() method')
- def create_model(self, model: SystemModel) -> ElementModel:
+ def create_model(self, model: FlowSystemModel) -> ElementModel:
raise NotImplementedError('Every Element needs a create_model() method')
@property
@@ -299,64 +895,100 @@ def _valid_label(label: str) -> str:
f'Use any other symbol instead'
)
if label.endswith(' '):
- logger.warning(f'Label "{label}" ends with a space. This will be removed.')
+ logger.error(f'Label "{label}" ends with a space. This will be removed.')
return label.rstrip()
return label
-class Model:
- """Stores Variables and Constraints."""
+class Submodel(SubmodelsMixin):
+ """Stores Variables and Constraints. Its a subset of a FlowSystemModel.
+ Variables and constraints are stored in the main FlowSystemModel, and are referenced here.
+ Can have other Submodels assigned, and can be a Submodel of another Submodel.
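+
+    Example (illustrative sketch; ``model`` is assumed to be an existing
+    FlowSystemModel)::
+
+        class BoilerModel(Submodel):  # hypothetical subclass
+            def _do_modeling(self):
+                # creates a variable named '<label_of_model>|flow_rate'
+                self.add_variables(short_name='flow_rate', lower=0)
+
+        submodel = BoilerModel(model, label_of_element='Boiler')
+        submodel['flow_rate']  # access the variable by its short name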
+ """
- def __init__(self, model: SystemModel, label_of_element: str, label: str = '', label_full: str | None = None):
+ def __init__(self, model: FlowSystemModel, label_of_element: str, label_of_model: str | None = None):
"""
Args:
- model: The SystemModel that is used to create the model.
+ model: The FlowSystemModel that is used to create the model.
label_of_element: The label of the parent (Element). Used to construct the full label of the model.
- label: The label of the model. Used to construct the full label of the model.
- label_full: The full label of the model. Can overwrite the full label constructed from the other labels.
+ label_of_model: The label of the model. Used as a prefix in all variables and constraints.
"""
self._model = model
self.label_of_element = label_of_element
- self._label = label
- self._label_full = label_full
-
- self._variables_direct: list[str] = []
- self._constraints_direct: list[str] = []
- self.sub_models: list[Model] = []
-
- self._variables_short: dict[str, str] = {}
- self._constraints_short: dict[str, str] = {}
- self._sub_models_short: dict[str, str] = {}
- logger.debug(f'Created {self.__class__.__name__} "{self.label_full}"')
-
- def do_modeling(self):
- raise NotImplementedError('Every Model needs a do_modeling() method')
-
- def add(
- self, item: linopy.Variable | linopy.Constraint | Model, short_name: str | None = None
- ) -> linopy.Variable | linopy.Constraint | Model:
- """
- Add a variable, constraint or sub-model to the model
-
- Args:
- item: The variable, constraint or sub-model to add to the model
- short_name: The short name of the variable, constraint or sub-model. If not provided, the full name is used.
- """
- # TODO: Check uniquenes of short names
- if isinstance(item, linopy.Variable):
- self._variables_direct.append(item.name)
- self._variables_short[item.name] = short_name or item.name
- elif isinstance(item, linopy.Constraint):
- self._constraints_direct.append(item.name)
- self._constraints_short[item.name] = short_name or item.name
- elif isinstance(item, Model):
- self.sub_models.append(item)
- self._sub_models_short[item.label_full] = short_name or item.label_full
- else:
- raise ValueError(
- f'Item must be a linopy.Variable, linopy.Constraint or flixopt.structure.Model, got {type(item)}'
- )
- return item
+ self.label_of_model = label_of_model if label_of_model is not None else self.label_of_element
+
+ self._variables: dict[str, linopy.Variable] = {} # Mapping from short name to variable
+ self._constraints: dict[str, linopy.Constraint] = {} # Mapping from short name to constraint
+ self.submodels: Submodels = Submodels({})
+
+ logger.debug(f'Creating {self.__class__.__name__} "{self.label_full}"')
+ self._do_modeling()
+
+    def add_variables(self, short_name: str | None = None, **kwargs) -> linopy.Variable:
+ """Create and register a variable in one step"""
+ if kwargs.get('name') is None:
+ if short_name is None:
+ raise ValueError('Short name must be provided when no name is given')
+ kwargs['name'] = f'{self.label_of_model}|{short_name}'
+
+ variable = self._model.add_variables(**kwargs)
+ self.register_variable(variable, short_name)
+ return variable
+
+    def add_constraints(self, expression, short_name: str | None = None, **kwargs) -> linopy.Constraint:
+ """Create and register a constraint in one step"""
+ if kwargs.get('name') is None:
+ if short_name is None:
+ raise ValueError('Short name must be provided when no name is given')
+ kwargs['name'] = f'{self.label_of_model}|{short_name}'
+
+ constraint = self._model.add_constraints(expression, **kwargs)
+ self.register_constraint(constraint, short_name)
+ return constraint
+
+    def register_variable(self, variable: linopy.Variable, short_name: str | None = None) -> linopy.Variable:
+        """Register a variable with the model"""
+        if short_name is None:
+            short_name = variable.name
+        if short_name in self._variables:
+            raise ValueError(f'Short name "{short_name}" already assigned to a model variable')
+
+ self._variables[short_name] = variable
+ return variable
+
+    def register_constraint(self, constraint: linopy.Constraint, short_name: str | None = None) -> linopy.Constraint:
+        """Register a constraint with the model"""
+        if short_name is None:
+            short_name = constraint.name
+        if short_name in self._constraints:
+            raise ValueError(f'Short name "{short_name}" already assigned to a model constraint')
+
+ self._constraints[short_name] = constraint
+ return constraint
+
+ def __getitem__(self, key: str) -> linopy.Variable:
+ """Get a variable by its short name"""
+ if key in self._variables:
+ return self._variables[key]
+ raise KeyError(f'Variable "{key}" not found in model "{self.label_full}"')
+
+ def __contains__(self, name: str) -> bool:
+ """Check if a variable exists in the model"""
+ return name in self._variables or name in self.variables
+
+ def get(self, name: str, default=None):
+ """Get variable by short name, returning default if not found"""
+ try:
+ return self[name]
+ except KeyError:
+ return default
+
+ def get_coords(
+ self,
+ dims: Collection[str] | None = None,
+ extra_timestep: bool = False,
+ ) -> xr.Coordinates | None:
+        """Delegate to the parent FlowSystemModel's get_coords()."""
+        return self._model.get_coords(dims=dims, extra_timestep=extra_timestep)
def filter_variables(
self,
@@ -381,252 +1013,158 @@ def filter_variables(
return all_variables[[name for name in all_variables if 'time' in all_variables[name].dims]]
raise ValueError(f'Invalid length "{length}", must be one of "scalar", "time" or None')
- @property
- def label(self) -> str:
- return self._label if self._label else self.label_of_element
-
@property
def label_full(self) -> str:
- """Used to construct the names of variables and constraints"""
- if self._label_full:
- return self._label_full
- elif self._label:
- return f'{self.label_of_element}|{self.label}'
- return self.label_of_element
+ return self.label_of_model
@property
def variables_direct(self) -> linopy.Variables:
- return self._model.variables[self._variables_direct]
+ """Variables of the model, excluding those of sub-models"""
+ return self._model.variables[[var.name for var in self._variables.values()]]
@property
def constraints_direct(self) -> linopy.Constraints:
- return self._model.constraints[self._constraints_direct]
+ """Constraints of the model, excluding those of sub-models"""
+ return self._model.constraints[[con.name for con in self._constraints.values()]]
@property
- def _variables(self) -> list[str]:
- all_variables = self._variables_direct.copy()
- for sub_model in self.sub_models:
- for variable in sub_model._variables:
- if variable in all_variables:
- raise KeyError(
- f"Duplicate key found: '{variable}' in both {self.label_full} and {sub_model.label_full}!"
- )
- all_variables.append(variable)
- return all_variables
+ def constraints(self) -> linopy.Constraints:
+ """All constraints of the model, including those of all sub-models"""
+ names = list(self.constraints_direct) + [
+ constraint_name for submodel in self.submodels.values() for constraint_name in submodel.constraints
+ ]
- @property
- def _constraints(self) -> list[str]:
- all_constraints = self._constraints_direct.copy()
- for sub_model in self.sub_models:
- for constraint in sub_model._constraints:
- if constraint in all_constraints:
- raise KeyError(f"Duplicate key found: '{constraint}' in both main model and submodel!")
- all_constraints.append(constraint)
- return all_constraints
+ return self._model.constraints[names]
@property
def variables(self) -> linopy.Variables:
- return self._model.variables[self._variables]
+ """All variables of the model, including those of all sub-models"""
+ names = list(self.variables_direct) + [
+ variable_name for submodel in self.submodels.values() for variable_name in submodel.variables
+ ]
- @property
- def constraints(self) -> linopy.Constraints:
- return self._model.constraints[self._constraints]
+ return self._model.variables[names]
- @property
- def all_sub_models(self) -> list[Model]:
- return [model for sub_model in self.sub_models for model in [sub_model] + sub_model.all_sub_models]
+ def __repr__(self) -> str:
+ """
+ Return a string representation of the linopy model.
+ """
+ # Extract content from existing representations
+ sections = {
+ f'Variables: [{len(self.variables)}/{len(self._model.variables)}]': self.variables.__repr__().split(
+ '\n', 2
+ )[2],
+ f'Constraints: [{len(self.constraints)}/{len(self._model.constraints)}]': self.constraints.__repr__().split(
+ '\n', 2
+ )[2],
+ f'Submodels: [{len(self.submodels)}]': self.submodels.__repr__().split('\n', 2)[2],
+ }
+ # Format sections with headers and underlines
+ formatted_sections = []
+ for section_header, section_content in sections.items():
+ formatted_sections.append(f'{section_header}\n{"-" * len(section_header)}\n{section_content}')
-class ElementModel(Model):
- """Stores the mathematical Variables and Constraints for Elements"""
+ model_string = f'Submodel "{self.label_of_model}":'
+ all_sections = '\n'.join(formatted_sections)
- def __init__(self, model: SystemModel, element: Element):
- """
- Args:
- model: The SystemModel that is used to create the model.
- element: The element this model is created for.
- """
- super().__init__(model, label_of_element=element.label_full, label=element.label, label_full=element.label_full)
- self.element = element
+ return f'{model_string}\n{"=" * len(model_string)}\n\n{all_sections}'
- def results_structure(self):
- return {
- 'label': self.label,
- 'label_full': self.label_full,
- 'variables': list(self.variables),
- 'constraints': list(self.constraints),
- }
+ @property
+ def hours_per_step(self):
+ return self._model.hours_per_step
+ def _do_modeling(self):
+ """Called at the end of initialization. Override in subclasses to create variables and constraints."""
+ pass
-def copy_and_convert_datatypes(data: Any, use_numpy: bool = True, use_element_label: bool = False) -> Any:
- """
- Converts values in a nested data structure into JSON-compatible types while preserving or transforming numpy arrays
- and custom `Element` objects based on the specified options.
- The function handles various data types and transforms them into a consistent, readable format:
- - Primitive types (`int`, `float`, `str`, `bool`, `None`) are returned as-is.
- - Numpy scalars are converted to their corresponding Python scalar types.
- - Collections (`list`, `tuple`, `set`, `dict`) are recursively processed to ensure all elements are compatible.
- - Numpy arrays are preserved or converted to lists, depending on `use_numpy`.
- - Custom `Element` objects can be represented either by their `label` or their initialization parameters as a dictionary.
- - Timestamps (`datetime`) are converted to ISO 8601 strings.
+@dataclass(repr=False)
+class Submodels:
+ """A simple collection for storing submodels with easy access and representation."""
- Args:
- data: The input data to process, which may be deeply nested and contain a mix of types.
- use_numpy: If `True`, numeric numpy arrays (`np.ndarray`) are preserved as-is. If `False`, they are converted to lists.
- Default is `True`.
- use_element_label: If `True`, `Element` objects are represented by their `label`. If `False`, they are converted into a dictionary
- based on their initialization parameters. Default is `False`.
-
- Returns:
- A transformed version of the input data, containing only JSON-compatible types:
- - `int`, `float`, `str`, `bool`, `None`
- - `list`, `dict`
- - `np.ndarray` (if `use_numpy=True`. This is NOT JSON-compatible)
-
- Raises:
- TypeError: If the data cannot be converted to the specified types.
-
- Examples:
- >>> copy_and_convert_datatypes({'a': np.array([1, 2, 3]), 'b': Element(label='example')})
- {'a': array([1, 2, 3]), 'b': {'class': 'Element', 'label': 'example'}}
-
- >>> copy_and_convert_datatypes({'a': np.array([1, 2, 3]), 'b': Element(label='example')}, use_numpy=False)
- {'a': [1, 2, 3], 'b': {'class': 'Element', 'label': 'example'}}
-
- Notes:
- - The function gracefully handles unexpected types by issuing a warning and returning a deep copy of the data.
- - Empty collections (lists, dictionaries) and default parameter values in `Element` objects are omitted from the output.
- - Numpy arrays with non-numeric data types are automatically converted to lists.
- """
- if isinstance(data, np.integer): # This must be checked before checking for regular int and float!
- return int(data)
- elif isinstance(data, np.floating):
- return float(data)
-
- elif isinstance(data, (int, float, str, bool, type(None))):
- return data
- elif isinstance(data, datetime):
- return data.isoformat()
-
- elif isinstance(data, (tuple, set)):
- return copy_and_convert_datatypes([item for item in data], use_numpy, use_element_label)
- elif isinstance(data, dict):
- return {
- copy_and_convert_datatypes(key, use_numpy, use_element_label=True): copy_and_convert_datatypes(
- value, use_numpy, use_element_label
- )
- for key, value in data.items()
- }
- elif isinstance(data, list): # Shorten arrays/lists to be readable
- if use_numpy and all([isinstance(value, (int, float)) for value in data]):
- return np.array([item for item in data])
- else:
- return [copy_and_convert_datatypes(item, use_numpy, use_element_label) for item in data]
+ data: dict[str, Submodel]
- elif isinstance(data, np.ndarray):
- if not use_numpy:
- return copy_and_convert_datatypes(data.tolist(), use_numpy, use_element_label)
- elif use_numpy and np.issubdtype(data.dtype, np.number):
- return data
- else:
- logger.critical(
- f'An np.array with non-numeric content was found: {data=}.It will be converted to a list instead'
- )
- return copy_and_convert_datatypes(data.tolist(), use_numpy, use_element_label)
+ def __getitem__(self, name: str) -> Submodel:
+ """Get a submodel by its name."""
+ return self.data[name]
- elif isinstance(data, TimeSeries):
- return copy_and_convert_datatypes(data.active_data, use_numpy, use_element_label)
- elif isinstance(data, TimeSeriesData):
- return copy_and_convert_datatypes(data.data, use_numpy, use_element_label)
+ def __getattr__(self, name: str) -> Submodel:
+ """Get a submodel by attribute access."""
+ if name in self.data:
+ return self.data[name]
+ raise AttributeError(f"Submodels has no attribute '{name}'")
- elif isinstance(data, Interface):
- if use_element_label and isinstance(data, Element):
- return data.label
- return data.infos(use_numpy, use_element_label)
- elif isinstance(data, xr.DataArray):
- # TODO: This is a temporary basic work around
- return copy_and_convert_datatypes(data.values, use_numpy, use_element_label)
- else:
- raise TypeError(f'copy_and_convert_datatypes() did get unexpected data of type "{type(data)}": {data=}')
+ def __len__(self) -> int:
+ return len(self.data)
+ def __iter__(self) -> Iterator[str]:
+ return iter(self.data)
-def get_compact_representation(data: Any, array_threshold: int = 50, decimals: int = 2) -> dict:
- """
- Generate a compact json serializable representation of deeply nested data.
- Numpy arrays are statistically described if they exceed a threshold and converted to lists.
+ def __contains__(self, name: str) -> bool:
+ return name in self.data
- Args:
- data (Any): The data to format and represent.
- array_threshold (int): Maximum length of NumPy arrays to display. Longer arrays are statistically described.
- decimals (int): Number of decimal places in which to describe the arrays.
+ def __repr__(self) -> str:
+ """Simple representation of the submodels collection."""
+ if not self.data:
+ return 'flixopt.structure.Submodels:\n----------------------------\n \n'
- Returns:
- dict: A dictionary representation of the data
- """
+ total_vars = sum(len(submodel.variables) for submodel in self.data.values())
+ total_cons = sum(len(submodel.constraints) for submodel in self.data.values())
- def format_np_array_if_found(value: Any) -> Any:
- """Recursively processes the data, formatting NumPy arrays."""
- if isinstance(value, (int, float, str, bool, type(None))):
- return value
- elif isinstance(value, np.ndarray):
- return describe_numpy_arrays(value)
- elif isinstance(value, dict):
- return {format_np_array_if_found(k): format_np_array_if_found(v) for k, v in value.items()}
- elif isinstance(value, (list, tuple, set)):
- return [format_np_array_if_found(v) for v in value]
- else:
- logger.warning(
- f'Unexpected value found when trying to format numpy array numpy array: {type(value)=}; {value=}'
- )
- return value
+ title = (
+ f'flixopt.structure.Submodels ({total_vars} vars, {total_cons} constraints, {len(self.data)} submodels):'
+ )
+ underline = '-' * len(title)
- def describe_numpy_arrays(arr: np.ndarray) -> str | list:
- """Shortens NumPy arrays if they exceed the specified length."""
+ sub_models_string = ''
+ for name, submodel in self.data.items():
+ type_name = submodel.__class__.__name__
+ var_count = len(submodel.variables)
+ con_count = len(submodel.constraints)
+ sub_models_string += f'\n * {name} [{type_name}] ({var_count}v/{con_count}c)'
- def normalized_center_of_mass(array: Any) -> float:
- # position in array (0 bis 1 normiert)
- positions = np.linspace(0, 1, len(array)) # weights w_i
- # mass center
- if np.sum(array) == 0:
- return np.nan
- else:
- return np.sum(positions * array) / np.sum(array)
-
- if arr.size > array_threshold: # Calculate basic statistics
- fmt = f'.{decimals}f'
- return (
- f'Array (min={np.min(arr):{fmt}}, max={np.max(arr):{fmt}}, mean={np.mean(arr):{fmt}}, '
- f'median={np.median(arr):{fmt}}, std={np.std(arr):{fmt}}, len={len(arr)}, '
- f'center={normalized_center_of_mass(arr):{fmt}})'
- )
- else:
- return np.around(arr, decimals=decimals).tolist()
+ return f'{title}\n{underline}{sub_models_string}\n'
- # Process the data to handle NumPy arrays
- formatted_data = format_np_array_if_found(copy_and_convert_datatypes(data, use_numpy=True))
+ def items(self) -> ItemsView[str, Submodel]:
+ return self.data.items()
- return formatted_data
+ def keys(self):
+ return self.data.keys()
+ def values(self):
+ return self.data.values()
-def get_str_representation(data: Any, array_threshold: int = 50, decimals: int = 2) -> str:
- """
- Generate a string representation of deeply nested data using `rich.print`.
- NumPy arrays are shortened to the specified length and converted to strings.
+ def add(self, submodel: Submodel, name: str) -> None:
+ """Add a submodel to the collection."""
+ self.data[name] = submodel
- Args:
- data (Any): The data to format and represent.
- array_threshold (int): Maximum length of NumPy arrays to display. Longer arrays are statistically described.
- decimals (int): Number of decimal places in which to describe the arrays.
+ def get(self, name: str, default=None):
+ """Get submodel by name, returning default if not found."""
+ return self.data.get(name, default)
- Returns:
- str: The formatted string representation of the data.
+
+class ElementModel(Submodel):
+ """
+ Stores the mathematical Variables and Constraints for Elements.
+ ElementModels are directly registered in the main FlowSystemModel
"""
- formatted_data = get_compact_representation(data, array_threshold, decimals)
+ def __init__(self, model: FlowSystemModel, element: Element):
+ """
+ Args:
+ model: The FlowSystemModel that is used to create the model.
+ element: The element this model is created for.
+ """
+ self.element = element
+ super().__init__(model, label_of_element=element.label_full, label_of_model=element.label_full)
+ self._model.add_submodels(self, short_name=self.label_of_model)
- # Use Rich to format and print the data
- with StringIO() as output_buffer:
- console = Console(file=output_buffer, width=1000) # Adjust width as needed
- console.print(Pretty(formatted_data, expand_all=True, indent_guides=True))
- return output_buffer.getvalue()
+ def results_structure(self):
+ return {
+ 'label': self.label_full,
+ 'variables': list(self.variables),
+ 'constraints': list(self.constraints),
+ }
diff --git a/flixopt/utils.py b/flixopt/utils.py
index 30ac46c97..f1e12b9dc 100644
--- a/flixopt/utils.py
+++ b/flixopt/utils.py
@@ -5,37 +5,60 @@
from __future__ import annotations
import logging
-from typing import TYPE_CHECKING, Any, Literal
+from typing import Literal
-if TYPE_CHECKING:
- import numpy as np
- import xarray as xr
+import numpy as np
+import xarray as xr
logger = logging.getLogger('flixopt')
-def is_number(number_alias: int | float | str) -> bool:
- """Returns True if value is a number or a number-like string."""
- try:
- float(number_alias)
- return True
- except (ValueError, TypeError):
- return False
+def round_nested_floats(obj, decimals=2):
+ """Recursively round floating point numbers in nested data structures.
+ This function traverses nested data structures (dictionaries, lists) and rounds
+ any floating point numbers to the specified number of decimal places. It handles
+ various data types including NumPy arrays and xarray DataArrays by converting
+ them to lists with rounded values.
-def round_floats(obj, decimals=2):
+ Args:
+ obj: The object to process. Can be a dict, list, float, int, numpy.ndarray,
+ xarray.DataArray, or any other type.
+ decimals (int, optional): Number of decimal places to round to. Defaults to 2.
+
+ Returns:
+ The processed object with the same structure as the input, but with all
+ floating point numbers rounded to the specified precision. NumPy arrays
+ and xarray DataArrays are converted to lists.
+
+ Examples:
+ >>> data = {'a': 3.14159, 'b': [1.234, 2.678]}
+ >>> round_nested_floats(data, decimals=2)
+ {'a': 3.14, 'b': [1.23, 2.68]}
+
+ >>> import numpy as np
+ >>> arr = np.array([1.234, 5.678])
+ >>> round_nested_floats(arr, decimals=1)
+ [1.2, 5.7]
+ """
if isinstance(obj, dict):
- return {k: round_floats(v, decimals) for k, v in obj.items()}
+ return {k: round_nested_floats(v, decimals) for k, v in obj.items()}
elif isinstance(obj, list):
- return [round_floats(v, decimals) for v in obj]
+ return [round_nested_floats(v, decimals) for v in obj]
elif isinstance(obj, float):
return round(obj, decimals)
+ elif isinstance(obj, int):
+ return obj
+ elif isinstance(obj, np.ndarray):
+ return np.round(obj, decimals).tolist()
+ elif isinstance(obj, xr.DataArray):
+ return obj.round(decimals).values.tolist()
return obj
def convert_dataarray(
data: xr.DataArray, mode: Literal['py', 'numpy', 'xarray', 'structure']
-) -> list[Any] | np.ndarray | xr.DataArray | str:
+) -> list | np.ndarray | xr.DataArray | str:
"""
Convert a DataArray to a different format.
diff --git a/mkdocs.yml b/mkdocs.yml
index 98747d987..b7c03faac 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -8,6 +8,56 @@ site_url: https://flixopt.github.io/flixopt/
repo_url: https://github.com/flixOpt/flixopt
repo_name: flixOpt/flixopt
+nav:
+ - Home: index.md
+ - Getting Started: getting-started.md
+ - User Guide:
+ - user-guide/index.md
+ - Recipes: user-guide/recipes/index.md
+ - Mathematical Notation:
+ - Overview: user-guide/mathematical-notation/index.md
+ - Dimensions: user-guide/mathematical-notation/dimensions.md
+ - Elements:
+ - Flow: user-guide/mathematical-notation/elements/Flow.md
+ - Bus: user-guide/mathematical-notation/elements/Bus.md
+ - Storage: user-guide/mathematical-notation/elements/Storage.md
+ - LinearConverter: user-guide/mathematical-notation/elements/LinearConverter.md
+ - Features:
+ - InvestParameters: user-guide/mathematical-notation/features/InvestParameters.md
+ - OnOffParameters: user-guide/mathematical-notation/features/OnOffParameters.md
+ - Piecewise: user-guide/mathematical-notation/features/Piecewise.md
+ - Effects, Penalty & Objective: user-guide/mathematical-notation/effects-penalty-objective.md
+ - Modeling Patterns:
+ - Overview: user-guide/mathematical-notation/modeling-patterns/index.md
+ - Bounds and States: user-guide/mathematical-notation/modeling-patterns/bounds-and-states.md
+ - Duration Tracking: user-guide/mathematical-notation/modeling-patterns/duration-tracking.md
+ - State Transitions: user-guide/mathematical-notation/modeling-patterns/state-transitions.md
+ - Examples: examples/
+ - Contribute: contribute.md
+ - API Reference:
+ - api-reference/index.md
+ - Aggregation: api-reference/aggregation.md
+ - Calculation: api-reference/calculation.md
+ - Commons: api-reference/commons.md
+ - Components: api-reference/components.md
+ - Config: api-reference/config.md
+ - Core: api-reference/core.md
+ - Effects: api-reference/effects.md
+ - Elements: api-reference/elements.md
+ - Features: api-reference/features.md
+ - Flow System: api-reference/flow_system.md
+ - Interface: api-reference/interface.md
+ - IO: api-reference/io.md
+ - Linear Converters: api-reference/linear_converters.md
+ - Modeling: api-reference/modeling.md
+ - Network App: api-reference/network_app.md
+ - Plotting: api-reference/plotting.md
+ - Results: api-reference/results.md
+ - Solvers: api-reference/solvers.md
+ - Structure: api-reference/structure.md
+ - Utils: api-reference/utils.md
+ - Release Notes: changelog/
+
theme:
name: material
@@ -88,9 +138,6 @@ plugins:
- gen-files:
scripts:
- scripts/gen_ref_pages.py
- - literate-nav:
- nav_file: SUMMARY.md
- implicit_index: true # This makes index.md the default landing page
- mkdocstrings: # Handles automatic API documentation generation
default_handler: python # Sets Python as the default language
handlers:
diff --git a/pics/flixopt-icon.svg b/pics/flixopt-icon.svg
index 04a6a6851..08fe340f9 100644
--- a/pics/flixopt-icon.svg
+++ b/pics/flixopt-icon.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
diff --git a/pyproject.toml b/pyproject.toml
index 6a523bbb4..3bb68efb4 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -48,6 +48,9 @@ dependencies = [
# Visualization
"matplotlib >= 3.5.2, < 4",
"plotly >= 5.15.0, < 7",
+
+ # Fix for numexpr compatibility issue with numpy 1.26.4 on Python 3.10
+ "numexpr >= 2.8.4, < 2.14; python_version < '3.11'", # Avoid 2.14.0 on older Python
]
[project.optional-dependencies]
@@ -116,10 +119,6 @@ documentation = "https://flixopt.github.io/flixopt/"
where = ["."]
include = ["flixopt*"]
exclude = ["tests*", "docs*", "examples*", "Tutorials*"]
-
-[tool.setuptools.package-data]
-"flixopt" = ["config.yaml"]
-
[tool.setuptools]
include-package-data = true
@@ -187,7 +186,28 @@ markers = [
"slow: marks tests as slow",
"examples: marks example tests (run only on releases)",
]
-addopts = "-m 'not examples'" # Skip examples by default
+addopts = '-m "not examples"' # Skip examples by default
+
+# Warning filter configuration for pytest
+# Filters are processed in order; the last matching filter wins
+# Format: "action:message:category:module"
+filterwarnings = [
+ # === Default behavior: show all warnings ===
+ "default",
+
+ # === Treat flixopt warnings as errors (strict mode for our code) ===
+ # This ensures we catch deprecations, future changes, and user warnings in our own code
+ "error::DeprecationWarning:flixopt",
+ "error::FutureWarning:flixopt",
+ "error::UserWarning:flixopt",
+
+ # === Third-party warnings (mirrored from __init__.py) ===
+ "ignore:.*minimal value.*exceeds.*:UserWarning:tsam",
+ "ignore:Coordinates across variables not equal:UserWarning:linopy",
+ "ignore:.*join will change from join='outer' to join='exact'.*:FutureWarning:linopy",
+ "ignore:numpy\\.ndarray size changed:RuntimeWarning",
+ "ignore:.*network visualization is still experimental.*:UserWarning:flixopt",
+]
[tool.bandit]
skips = ["B101", "B506"] # assert_used and yaml_load
diff --git a/scripts/gen_ref_pages.py b/scripts/gen_ref_pages.py
index f2de8a701..3c8eb600a 100644
--- a/scripts/gen_ref_pages.py
+++ b/scripts/gen_ref_pages.py
@@ -1,4 +1,4 @@
-"""Generate the code reference pages and navigation."""
+"""Generate the code reference pages."""
import sys
from pathlib import Path
@@ -9,11 +9,11 @@
root = Path(__file__).parent.parent
sys.path.insert(0, str(root))
-nav = mkdocs_gen_files.Nav()
-
src = root / 'flixopt'
api_dir = 'api-reference'
+generated_files = []
+
for path in sorted(src.rglob('*.py')):
module_path = path.relative_to(src).with_suffix('')
doc_path = path.relative_to(src).with_suffix('.md')
@@ -30,10 +30,8 @@
elif parts[-1] == '__main__' or parts[-1].startswith('_'):
continue
- # Only add to navigation if there are actual parts
+ # Only generate documentation if there are actual parts
if parts:
- nav[parts] = doc_path.as_posix()
-
# Generate documentation file - always using the flixopt prefix
with mkdocs_gen_files.open(full_doc_path, 'w') as fd:
# Use 'flixopt.' prefix for all module references
@@ -41,6 +39,7 @@
fd.write(f'::: {module_id}\n options:\n inherited_members: true\n')
mkdocs_gen_files.set_edit_path(full_doc_path, path.relative_to(root))
+ generated_files.append(str(full_doc_path))
# Create an index file for the API reference
with mkdocs_gen_files.open(f'{api_dir}/index.md', 'w') as index_file:
@@ -50,5 +49,7 @@
'For more information on how to use the classes and functions, see the [User Guide](../user-guide/index.md) section.\n'
)
-with mkdocs_gen_files.open(f'{api_dir}/SUMMARY.md', 'w') as nav_file:
- nav_file.writelines(nav.build_literate_nav())
+# Print generated files for validation
+print(f'Generated {len(generated_files)} API reference files:')
+for file in sorted(generated_files):
+ print(f' - {file}')
diff --git a/tests/conftest.py b/tests/conftest.py
index ac2bab5f4..ac5255562 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -5,6 +5,7 @@
"""
import os
+from collections.abc import Iterable
import linopy.testing
import numpy as np
@@ -13,7 +14,11 @@
import xarray as xr
import flixopt as fx
-from flixopt.structure import SystemModel
+from flixopt.structure import FlowSystemModel
+
+# ============================================================================
+# SOLVER FIXTURES
+# ============================================================================
@pytest.fixture()
@@ -23,28 +28,371 @@ def highs_solver():
@pytest.fixture()
def gurobi_solver():
+ pytest.importorskip('gurobipy', reason='Gurobi not available in this environment')
return fx.solvers.GurobiSolver(mip_gap=0, time_limit_seconds=300)
-@pytest.fixture(params=[highs_solver, gurobi_solver])
+@pytest.fixture(params=[highs_solver, gurobi_solver], ids=['highs', 'gurobi'])
def solver_fixture(request):
return request.getfixturevalue(request.param.__name__)
-# Custom assertion function
-def assert_almost_equal_numeric(
- actual, desired, err_msg, relative_error_range_in_percent=0.011, absolute_tolerance=1e-9
-):
- """
- Custom assertion function for comparing numeric values with relative and absolute tolerances
- """
- relative_tol = relative_error_range_in_percent / 100
+# ============================================================================
+# COORDINATE CONFIGURATION FIXTURES
+# ============================================================================
+
+
+@pytest.fixture(
+ params=[
+ {
+ 'timesteps': pd.date_range('2020-01-01', periods=10, freq='h', name='time'),
+ 'periods': None,
+ 'scenarios': None,
+ },
+ {
+ 'timesteps': pd.date_range('2020-01-01', periods=10, freq='h', name='time'),
+ 'periods': None,
+ 'scenarios': pd.Index(['A', 'B'], name='scenario'),
+ },
+ {
+ 'timesteps': pd.date_range('2020-01-01', periods=10, freq='h', name='time'),
+ 'periods': pd.Index([2020, 2030, 2040], name='period'),
+ 'scenarios': None,
+ },
+ {
+ 'timesteps': pd.date_range('2020-01-01', periods=10, freq='h', name='time'),
+ 'periods': pd.Index([2020, 2030, 2040], name='period'),
+ 'scenarios': pd.Index(['A', 'B'], name='scenario'),
+ },
+ ],
+ ids=['time_only', 'time+scenarios', 'time+periods', 'time+periods+scenarios'],
+)
+def coords_config(request):
+ """Coordinate configurations for parametrized testing."""
+ return request.param
+
+
+# ============================================================================
+# HIERARCHICAL ELEMENT LIBRARY
+# ============================================================================
+
+
+class Buses:
+ """Standard buses used across flow systems"""
+
+ @staticmethod
+ def electricity():
+ return fx.Bus('Strom')
+
+ @staticmethod
+ def heat():
+ return fx.Bus('Fernwärme')
+
+ @staticmethod
+ def gas():
+ return fx.Bus('Gas')
+
+ @staticmethod
+ def coal():
+ return fx.Bus('Kohle')
+
+ @staticmethod
+ def defaults():
+ """Get all standard buses at once"""
+ return [Buses.electricity(), Buses.heat(), Buses.gas()]
+
+
+class Effects:
+ """Standard effects used across flow systems"""
+
+ @staticmethod
+ def costs():
+ return fx.Effect('costs', '€', 'Kosten', is_standard=True, is_objective=True)
+
+ @staticmethod
+ def costs_with_co2_share():
+ return fx.Effect('costs', '€', 'Kosten', is_standard=True, is_objective=True, share_from_temporal={'CO2': 0.2})
+
+ @staticmethod
+ def co2():
+ return fx.Effect('CO2', 'kg', 'CO2_e-Emissionen')
+
+ @staticmethod
+ def primary_energy():
+ return fx.Effect('PE', 'kWh_PE', 'Primärenergie')
+
+
+class Converters:
+ """Energy conversion components"""
+
+ class Boilers:
+ @staticmethod
+ def simple():
+ """Simple boiler from simple_flow_system"""
+ return fx.linear_converters.Boiler(
+ 'Boiler',
+ eta=0.5,
+ Q_th=fx.Flow(
+ 'Q_th',
+ bus='Fernwärme',
+ size=50,
+ relative_minimum=5 / 50,
+ relative_maximum=1,
+ on_off_parameters=fx.OnOffParameters(),
+ ),
+ Q_fu=fx.Flow('Q_fu', bus='Gas'),
+ )
+
+ @staticmethod
+ def complex():
+ """Complex boiler with investment parameters from flow_system_complex"""
+ return fx.linear_converters.Boiler(
+ 'Kessel',
+ eta=0.5,
+ on_off_parameters=fx.OnOffParameters(effects_per_running_hour={'costs': 0, 'CO2': 1000}),
+ Q_th=fx.Flow(
+ 'Q_th',
+ bus='Fernwärme',
+ load_factor_max=1.0,
+ load_factor_min=0.1,
+ relative_minimum=5 / 50,
+ relative_maximum=1,
+ previous_flow_rate=50,
+ size=fx.InvestParameters(
+ effects_of_investment=1000,
+ fixed_size=50,
+ mandatory=True,
+ effects_of_investment_per_size={'costs': 10, 'PE': 2},
+ ),
+ on_off_parameters=fx.OnOffParameters(
+ on_hours_total_min=0,
+ on_hours_total_max=1000,
+ consecutive_on_hours_max=10,
+ consecutive_on_hours_min=1,
+ consecutive_off_hours_max=10,
+ effects_per_switch_on=0.01,
+ switch_on_total_max=1000,
+ ),
+ flow_hours_total_max=1e6,
+ ),
+ Q_fu=fx.Flow('Q_fu', bus='Gas', size=200, relative_minimum=0, relative_maximum=1),
+ )
+
+ class CHPs:
+ @staticmethod
+ def simple():
+ """Simple CHP from simple_flow_system"""
+ return fx.linear_converters.CHP(
+ 'CHP_unit',
+ eta_th=0.5,
+ eta_el=0.4,
+ P_el=fx.Flow(
+ 'P_el', bus='Strom', size=60, relative_minimum=5 / 60, on_off_parameters=fx.OnOffParameters()
+ ),
+ Q_th=fx.Flow('Q_th', bus='Fernwärme'),
+ Q_fu=fx.Flow('Q_fu', bus='Gas'),
+ )
+
+ @staticmethod
+ def base():
+ """CHP from flow_system_base"""
+ return fx.linear_converters.CHP(
+ 'KWK',
+ eta_th=0.5,
+ eta_el=0.4,
+ on_off_parameters=fx.OnOffParameters(effects_per_switch_on=0.01),
+ P_el=fx.Flow('P_el', bus='Strom', size=60, relative_minimum=5 / 60, previous_flow_rate=10),
+ Q_th=fx.Flow('Q_th', bus='Fernwärme', size=1e3),
+ Q_fu=fx.Flow('Q_fu', bus='Gas', size=1e3),
+ )
+
+ class LinearConverters:
+ @staticmethod
+ def piecewise():
+ """Piecewise converter from flow_system_piecewise_conversion"""
+ return fx.LinearConverter(
+ 'KWK',
+ inputs=[fx.Flow('Q_fu', bus='Gas')],
+ outputs=[
+ fx.Flow('P_el', bus='Strom', size=60, relative_maximum=55, previous_flow_rate=10),
+ fx.Flow('Q_th', bus='Fernwärme'),
+ ],
+ piecewise_conversion=fx.PiecewiseConversion(
+ {
+ 'P_el': fx.Piecewise([fx.Piece(5, 30), fx.Piece(40, 60)]),
+ 'Q_th': fx.Piecewise([fx.Piece(6, 35), fx.Piece(45, 100)]),
+ 'Q_fu': fx.Piecewise([fx.Piece(12, 70), fx.Piece(90, 200)]),
+ }
+ ),
+ on_off_parameters=fx.OnOffParameters(effects_per_switch_on=0.01),
+ )
+
+ @staticmethod
+ def segments(timesteps_length):
+ """Segments converter with time-varying piecewise conversion"""
+ return fx.LinearConverter(
+ 'KWK',
+ inputs=[fx.Flow('Q_fu', bus='Gas')],
+ outputs=[
+ fx.Flow('P_el', bus='Strom', size=60, relative_maximum=55, previous_flow_rate=10),
+ fx.Flow('Q_th', bus='Fernwärme'),
+ ],
+ piecewise_conversion=fx.PiecewiseConversion(
+ {
+ 'P_el': fx.Piecewise(
+ [
+ fx.Piece(np.linspace(5, 6, timesteps_length), 30),
+ fx.Piece(40, np.linspace(60, 70, timesteps_length)),
+ ]
+ ),
+ 'Q_th': fx.Piecewise([fx.Piece(6, 35), fx.Piece(45, 100)]),
+ 'Q_fu': fx.Piecewise([fx.Piece(12, 70), fx.Piece(90, 200)]),
+ }
+ ),
+ on_off_parameters=fx.OnOffParameters(effects_per_switch_on=0.01),
+ )
+
+
+class Storage:
+ """Energy storage components"""
+
+ @staticmethod
+ def simple(timesteps_length=9):
+ """Simple storage from simple_flow_system"""
+        # Repeat/slice the base charge-state pattern to match timesteps_length
+ pattern = [80.0, 70.0, 80.0, 80, 80, 80, 80, 80, 80]
+ charge_state_values = (pattern * ((timesteps_length // len(pattern)) + 1))[:timesteps_length]
+
+ return fx.Storage(
+ 'Speicher',
+ charging=fx.Flow('Q_th_load', bus='Fernwärme', size=1e4),
+ discharging=fx.Flow('Q_th_unload', bus='Fernwärme', size=1e4),
+ capacity_in_flow_hours=fx.InvestParameters(effects_of_investment=20, fixed_size=30, mandatory=True),
+ initial_charge_state=0,
+ relative_maximum_charge_state=1 / 100 * np.array(charge_state_values),
+ relative_maximum_final_charge_state=0.8,
+ eta_charge=0.9,
+ eta_discharge=1,
+ relative_loss_per_hour=0.08,
+ prevent_simultaneous_charge_and_discharge=True,
+ )
+
+ @staticmethod
+ def complex():
+ """Complex storage with piecewise investment from flow_system_complex"""
+ invest_speicher = fx.InvestParameters(
+ effects_of_investment=0,
+ piecewise_effects_of_investment=fx.PiecewiseEffects(
+ piecewise_origin=fx.Piecewise([fx.Piece(5, 25), fx.Piece(25, 100)]),
+ piecewise_shares={
+ 'costs': fx.Piecewise([fx.Piece(50, 250), fx.Piece(250, 800)]),
+ 'PE': fx.Piecewise([fx.Piece(5, 25), fx.Piece(25, 100)]),
+ },
+ ),
+ mandatory=True,
+ effects_of_investment_per_size={'costs': 0.01, 'CO2': 0.01},
+ minimum_size=0,
+ maximum_size=1000,
+ )
+ return fx.Storage(
+ 'Speicher',
+ charging=fx.Flow('Q_th_load', bus='Fernwärme', size=1e4),
+ discharging=fx.Flow('Q_th_unload', bus='Fernwärme', size=1e4),
+ capacity_in_flow_hours=invest_speicher,
+ initial_charge_state=0,
+ maximal_final_charge_state=10,
+ eta_charge=0.9,
+ eta_discharge=1,
+ relative_loss_per_hour=0.08,
+ prevent_simultaneous_charge_and_discharge=True,
+ )
+
+
+class LoadProfiles:
+ """Standard load and price profiles"""
+
+ @staticmethod
+ def thermal_simple(timesteps_length=9):
+ # Create pattern and repeat/slice to match timesteps_length
+ pattern = [30.0, 0.0, 90.0, 110, 110, 20, 20, 20, 20]
+ values = (pattern * ((timesteps_length // len(pattern)) + 1))[:timesteps_length]
+ return np.array(values)
+
+ @staticmethod
+ def thermal_complex():
+ return np.array([30, 0, 90, 110, 110, 20, 20, 20, 20])
+
+ @staticmethod
+ def electrical_simple(timesteps_length=9):
+        # Constant price of 0.08 (= 80.0 / 1000) repeated to match timesteps_length
+ return np.array([80.0 / 1000] * timesteps_length)
+
+ @staticmethod
+ def electrical_scenario():
+ return np.array([0.08, 0.1, 0.15])
+
+ @staticmethod
+ def electrical_complex(timesteps_length=9):
+ # Create array of 40 repeated to match timesteps_length
+ return np.array([40] * timesteps_length)
+
+ @staticmethod
+ def random_thermal(length=10, seed=42):
+ np.random.seed(seed)
+ return np.array([np.random.random() for _ in range(length)]) * 180
+
+ @staticmethod
+ def random_electrical(length=10, seed=42):
+ np.random.seed(seed)
+ return (np.array([np.random.random() for _ in range(length)]) + 0.5) / 1.5 * 50
+
+
+class Sinks:
+ """Energy sinks (loads)"""
+
+ @staticmethod
+ def heat_load(thermal_profile):
+ """Create thermal heat load sink"""
+ return fx.Sink(
+ 'Wärmelast', inputs=[fx.Flow('Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=thermal_profile)]
+ )
+
+ @staticmethod
+ def electricity_feed_in(electrical_price_profile):
+ """Create electricity feed-in sink"""
+ return fx.Sink(
+ 'Einspeisung', inputs=[fx.Flow('P_el', bus='Strom', effects_per_flow_hour=-1 * electrical_price_profile)]
+ )
+
+ @staticmethod
+ def electricity_load(electrical_profile):
+ """Create electrical load sink (for flow_system_long)"""
+ return fx.Sink(
+ 'Stromlast', inputs=[fx.Flow('P_el_Last', bus='Strom', size=1, fixed_relative_profile=electrical_profile)]
+ )
+
+
+class Sources:
+ """Energy sources"""
+
+ @staticmethod
+ def gas_with_costs_and_co2():
+ """Standard gas tariff with CO2 emissions"""
+ source = Sources.gas_with_costs()
+ source.outputs[0].effects_per_flow_hour = {'costs': 0.04, 'CO2': 0.3}
+ return source
+
+ @staticmethod
+ def gas_with_costs():
+ """Simple gas tariff without CO2"""
+ return fx.Source(
+ 'Gastarif', outputs=[fx.Flow(label='Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={'costs': 0.04})]
+ )
- if isinstance(desired, (int, float)):
- delta = abs(relative_tol * desired) if desired != 0 else absolute_tolerance
- assert np.isclose(actual, desired, atol=delta), err_msg
- else:
- np.testing.assert_allclose(actual, desired, rtol=relative_tol, atol=absolute_tolerance, err_msg=err_msg)
+
+# ============================================================================
+# RECREATED FIXTURES USING HIERARCHICAL LIBRARY
+# ============================================================================
@pytest.fixture
@@ -52,71 +400,60 @@ def simple_flow_system() -> fx.FlowSystem:
"""
Create a simple energy system for testing
"""
- base_thermal_load = np.array([30.0, 0.0, 90.0, 110, 110, 20, 20, 20, 20])
- base_electrical_price = 1 / 1000 * np.array([80.0, 80.0, 80.0, 80, 80, 80, 80, 80, 80])
base_timesteps = pd.date_range('2020-01-01', periods=9, freq='h', name='time')
+ timesteps_length = len(base_timesteps)
+ base_thermal_load = LoadProfiles.thermal_simple(timesteps_length)
+ base_electrical_price = LoadProfiles.electrical_simple(timesteps_length)
+
# Define effects
- costs = fx.Effect('costs', '€', 'Kosten', is_standard=True, is_objective=True)
- co2 = fx.Effect(
- 'CO2',
- 'kg',
- 'CO2_e-Emissionen',
- specific_share_to_other_effects_operation={costs.label: 0.2},
- maximum_operation_per_hour=1000,
- )
+ costs = Effects.costs_with_co2_share()
+ co2 = Effects.co2()
+ co2.maximum_per_hour = 1000
# Create components
- boiler = fx.linear_converters.Boiler(
- 'Boiler',
- eta=0.5,
- Q_th=fx.Flow(
- 'Q_th',
- bus='Fernwärme',
- size=50,
- relative_minimum=5 / 50,
- relative_maximum=1,
- on_off_parameters=fx.OnOffParameters(),
- ),
- Q_fu=fx.Flow('Q_fu', bus='Gas'),
- )
+ boiler = Converters.Boilers.simple()
+ chp = Converters.CHPs.simple()
+ storage = Storage.simple(timesteps_length)
+ heat_load = Sinks.heat_load(base_thermal_load)
+ gas_tariff = Sources.gas_with_costs_and_co2()
+ electricity_feed_in = Sinks.electricity_feed_in(base_electrical_price)
- chp = fx.linear_converters.CHP(
- 'CHP_unit',
- eta_th=0.5,
- eta_el=0.4,
- P_el=fx.Flow('P_el', bus='Strom', size=60, relative_minimum=5 / 60, on_off_parameters=fx.OnOffParameters()),
- Q_th=fx.Flow('Q_th', bus='Fernwärme'),
- Q_fu=fx.Flow('Q_fu', bus='Gas'),
- )
+ # Create flow system
+ flow_system = fx.FlowSystem(base_timesteps)
+ flow_system.add_elements(*Buses.defaults())
+ flow_system.add_elements(storage, costs, co2, boiler, heat_load, gas_tariff, electricity_feed_in, chp)
- storage = fx.Storage(
- 'Speicher',
- charging=fx.Flow('Q_th_load', bus='Fernwärme', size=1e4),
- discharging=fx.Flow('Q_th_unload', bus='Fernwärme', size=1e4),
- capacity_in_flow_hours=fx.InvestParameters(fix_effects=20, fixed_size=30, optional=False),
- initial_charge_state=0,
- relative_maximum_charge_state=1 / 100 * np.array([80.0, 70.0, 80.0, 80, 80, 80, 80, 80, 80, 80]),
- eta_charge=0.9,
- eta_discharge=1,
- relative_loss_per_hour=0.08,
- prevent_simultaneous_charge_and_discharge=True,
- )
+ return flow_system
- heat_load = fx.Sink(
- 'Wärmelast', sink=fx.Flow('Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=base_thermal_load)
- )
- gas_tariff = fx.Source(
- 'Gastarif', source=fx.Flow('Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={'costs': 0.04, 'CO2': 0.3})
- )
+@pytest.fixture
+def simple_flow_system_scenarios() -> fx.FlowSystem:
+ """
+    Create a simple energy system with weighted scenarios for testing
+ """
+ base_timesteps = pd.date_range('2020-01-01', periods=9, freq='h', name='time')
+ timesteps_length = len(base_timesteps)
+ base_thermal_load = LoadProfiles.thermal_simple(timesteps_length)
+ base_electrical_price = LoadProfiles.electrical_scenario()
- electricity_feed_in = fx.Sink(
- 'Einspeisung', sink=fx.Flow('P_el', bus='Strom', effects_per_flow_hour=-1 * base_electrical_price)
- )
+ # Define effects
+ costs = Effects.costs_with_co2_share()
+ co2 = Effects.co2()
+ co2.maximum_per_hour = 1000
+
+ # Create components
+ boiler = Converters.Boilers.simple()
+ chp = Converters.CHPs.simple()
+ storage = Storage.simple(timesteps_length)
+ heat_load = Sinks.heat_load(base_thermal_load)
+ gas_tariff = Sources.gas_with_costs_and_co2()
+ electricity_feed_in = Sinks.electricity_feed_in(base_electrical_price)
# Create flow system
- flow_system = fx.FlowSystem(base_timesteps)
- flow_system.add_elements(fx.Bus('Strom'), fx.Bus('Fernwärme'), fx.Bus('Gas'))
+ flow_system = fx.FlowSystem(
+ base_timesteps, scenarios=pd.Index(['A', 'B', 'C']), weights=np.array([0.5, 0.25, 0.25])
+ )
+ flow_system.add_elements(*Buses.defaults())
flow_system.add_elements(storage, costs, co2, boiler, heat_load, gas_tariff, electricity_feed_in, chp)
return flow_system
@@ -126,18 +463,17 @@ def simple_flow_system() -> fx.FlowSystem:
def basic_flow_system() -> fx.FlowSystem:
"""Create basic elements for component testing"""
flow_system = fx.FlowSystem(pd.date_range('2020-01-01', periods=10, freq='h', name='time'))
- thermal_load = np.array([np.random.random() for _ in range(10)]) * 180
- p_el = (np.array([np.random.random() for _ in range(10)]) + 0.5) / 1.5 * 50
- flow_system.add_elements(
- fx.Bus('Strom'),
- fx.Bus('Fernwärme'),
- fx.Bus('Gas'),
- fx.Effect('Costs', '€', 'Kosten', is_standard=True, is_objective=True),
- fx.Sink('Wärmelast', sink=fx.Flow('Q_th_Last', 'Fernwärme', size=1, fixed_relative_profile=thermal_load)),
- fx.Source('Gastarif', source=fx.Flow('Q_Gas', 'Gas', size=1000, effects_per_flow_hour=0.04)),
- fx.Sink('Einspeisung', sink=fx.Flow('P_el', 'Strom', effects_per_flow_hour=-1 * p_el)),
- )
+ thermal_load = LoadProfiles.random_thermal(10)
+ p_el = LoadProfiles.random_electrical(10)
+
+ costs = Effects.costs()
+ heat_load = Sinks.heat_load(thermal_load)
+ gas_source = Sources.gas_with_costs()
+ electricity_sink = Sinks.electricity_feed_in(p_el)
+
+ flow_system.add_elements(*Buses.defaults())
+ flow_system.add_elements(costs, heat_load, gas_source, electricity_sink)
return flow_system
@@ -147,79 +483,26 @@ def flow_system_complex() -> fx.FlowSystem:
"""
Helper method to create a base model with configurable parameters
"""
- thermal_load = np.array([30, 0, 90, 110, 110, 20, 20, 20, 20])
- electrical_load = np.array([40, 40, 40, 40, 40, 40, 40, 40, 40])
+ thermal_load = LoadProfiles.thermal_complex()
+ electrical_load = LoadProfiles.electrical_complex()
flow_system = fx.FlowSystem(pd.date_range('2020-01-01', periods=9, freq='h', name='time'))
+
# Define the components and flow_system
- flow_system.add_elements(
- fx.Effect('costs', '€', 'Kosten', is_standard=True, is_objective=True),
- fx.Effect('CO2', 'kg', 'CO2_e-Emissionen', specific_share_to_other_effects_operation={'costs': 0.2}),
- fx.Effect('PE', 'kWh_PE', 'Primärenergie', maximum_total=3.5e3),
- fx.Bus('Strom'),
- fx.Bus('Fernwärme'),
- fx.Bus('Gas'),
- fx.Sink('Wärmelast', sink=fx.Flow('Q_th_Last', 'Fernwärme', size=1, fixed_relative_profile=thermal_load)),
- fx.Source(
- 'Gastarif', source=fx.Flow('Q_Gas', 'Gas', size=1000, effects_per_flow_hour={'costs': 0.04, 'CO2': 0.3})
- ),
- fx.Sink('Einspeisung', sink=fx.Flow('P_el', 'Strom', effects_per_flow_hour=-1 * electrical_load)),
- )
+ costs = Effects.costs()
+ co2 = Effects.co2()
+ costs.share_from_temporal = {'CO2': 0.2}
+ pe = Effects.primary_energy()
+ pe.maximum_total = 3.5e3
- boiler = fx.linear_converters.Boiler(
- 'Kessel',
- eta=0.5,
- on_off_parameters=fx.OnOffParameters(effects_per_running_hour={'costs': 0, 'CO2': 1000}),
- Q_th=fx.Flow(
- 'Q_th',
- bus='Fernwärme',
- load_factor_max=1.0,
- load_factor_min=0.1,
- relative_minimum=5 / 50,
- relative_maximum=1,
- previous_flow_rate=50,
- size=fx.InvestParameters(
- fix_effects=1000, fixed_size=50, optional=False, specific_effects={'costs': 10, 'PE': 2}
- ),
- on_off_parameters=fx.OnOffParameters(
- on_hours_total_min=0,
- on_hours_total_max=1000,
- consecutive_on_hours_max=10,
- consecutive_on_hours_min=1,
- consecutive_off_hours_max=10,
- effects_per_switch_on=0.01,
- switch_on_total_max=1000,
- ),
- flow_hours_total_max=1e6,
- ),
- Q_fu=fx.Flow('Q_fu', bus='Gas', size=200, relative_minimum=0, relative_maximum=1),
- )
+ heat_load = Sinks.heat_load(thermal_load)
+ gas_tariff = Sources.gas_with_costs_and_co2()
+ electricity_feed_in = Sinks.electricity_feed_in(electrical_load)
- invest_speicher = fx.InvestParameters(
- fix_effects=0,
- piecewise_effects=fx.PiecewiseEffects(
- piecewise_origin=fx.Piecewise([fx.Piece(5, 25), fx.Piece(25, 100)]),
- piecewise_shares={
- 'costs': fx.Piecewise([fx.Piece(50, 250), fx.Piece(250, 800)]),
- 'PE': fx.Piecewise([fx.Piece(5, 25), fx.Piece(25, 100)]),
- },
- ),
- optional=False,
- specific_effects={'costs': 0.01, 'CO2': 0.01},
- minimum_size=0,
- maximum_size=1000,
- )
- speicher = fx.Storage(
- 'Speicher',
- charging=fx.Flow('Q_th_load', bus='Fernwärme', size=1e4),
- discharging=fx.Flow('Q_th_unload', bus='Fernwärme', size=1e4),
- capacity_in_flow_hours=invest_speicher,
- initial_charge_state=0,
- maximal_final_charge_state=10,
- eta_charge=0.9,
- eta_discharge=1,
- relative_loss_per_hour=0.08,
- prevent_simultaneous_charge_and_discharge=True,
- )
+ flow_system.add_elements(*Buses.defaults())
+ flow_system.add_elements(costs, co2, pe, heat_load, gas_tariff, electricity_feed_in)
+
+ boiler = Converters.Boilers.complex()
+ speicher = Storage.complex()
flow_system.add_elements(boiler, speicher)
@@ -232,45 +515,16 @@ def flow_system_base(flow_system_complex) -> fx.FlowSystem:
Helper method to create a base model with configurable parameters
"""
flow_system = flow_system_complex
-
- flow_system.add_elements(
- fx.linear_converters.CHP(
- 'KWK',
- eta_th=0.5,
- eta_el=0.4,
- on_off_parameters=fx.OnOffParameters(effects_per_switch_on=0.01),
- P_el=fx.Flow('P_el', bus='Strom', size=60, relative_minimum=5 / 60, previous_flow_rate=10),
- Q_th=fx.Flow('Q_th', bus='Fernwärme', size=1e3),
- Q_fu=fx.Flow('Q_fu', bus='Gas', size=1e3),
- )
- )
-
+ chp = Converters.CHPs.base()
+ flow_system.add_elements(chp)
return flow_system
@pytest.fixture
def flow_system_piecewise_conversion(flow_system_complex) -> fx.FlowSystem:
flow_system = flow_system_complex
-
- flow_system.add_elements(
- fx.LinearConverter(
- 'KWK',
- inputs=[fx.Flow('Q_fu', bus='Gas')],
- outputs=[
- fx.Flow('P_el', bus='Strom', size=60, relative_maximum=55, previous_flow_rate=10),
- fx.Flow('Q_th', bus='Fernwärme'),
- ],
- piecewise_conversion=fx.PiecewiseConversion(
- {
- 'P_el': fx.Piecewise([fx.Piece(5, 30), fx.Piece(40, 60)]),
- 'Q_th': fx.Piecewise([fx.Piece(6, 35), fx.Piece(45, 100)]),
- 'Q_fu': fx.Piecewise([fx.Piece(12, 70), fx.Piece(90, 200)]),
- }
- ),
- on_off_parameters=fx.OnOffParameters(effects_per_switch_on=0.01),
- )
- )
-
+ converter = Converters.LinearConverters.piecewise()
+ flow_system.add_elements(converter)
return flow_system
@@ -280,38 +534,16 @@ def flow_system_segments_of_flows_2(flow_system_complex) -> fx.FlowSystem:
Use segments/Piecewise with numeric data
"""
flow_system = flow_system_complex
-
- flow_system.add_elements(
- fx.LinearConverter(
- 'KWK',
- inputs=[fx.Flow('Q_fu', bus='Gas')],
- outputs=[
- fx.Flow('P_el', bus='Strom', size=60, relative_maximum=55, previous_flow_rate=10),
- fx.Flow('Q_th', bus='Fernwärme'),
- ],
- piecewise_conversion=fx.PiecewiseConversion(
- {
- 'P_el': fx.Piecewise(
- [
- fx.Piece(np.linspace(5, 6, len(flow_system.time_series_collection.timesteps)), 30),
- fx.Piece(40, np.linspace(60, 70, len(flow_system.time_series_collection.timesteps))),
- ]
- ),
- 'Q_th': fx.Piecewise([fx.Piece(6, 35), fx.Piece(45, 100)]),
- 'Q_fu': fx.Piecewise([fx.Piece(12, 70), fx.Piece(90, 200)]),
- }
- ),
- on_off_parameters=fx.OnOffParameters(effects_per_switch_on=0.01),
- )
- )
-
+ converter = Converters.LinearConverters.segments(len(flow_system.timesteps))
+ flow_system.add_elements(converter)
return flow_system
@pytest.fixture
def flow_system_long():
"""
- Fixture to create and return the flow system with loaded data
+ Special fixture with CSV data loading - kept separate for backward compatibility
+ Uses library components where possible, but has special elements inline
"""
# Load data
filename = os.path.join(os.path.dirname(__file__), 'ressources', 'Zeitreihen2020.csv')
@@ -326,38 +558,38 @@ def flow_system_long():
thermal_load_ts, electrical_load_ts = (
fx.TimeSeriesData(thermal_load),
- fx.TimeSeriesData(electrical_load, agg_weight=0.7),
+ fx.TimeSeriesData(electrical_load, aggregation_weight=0.7),
)
p_feed_in, p_sell = (
- fx.TimeSeriesData(-(p_el - 0.5), agg_group='p_el'),
- fx.TimeSeriesData(p_el + 0.5, agg_group='p_el'),
+ fx.TimeSeriesData(-(p_el - 0.5), aggregation_group='p_el'),
+ fx.TimeSeriesData(p_el + 0.5, aggregation_group='p_el'),
)
flow_system = fx.FlowSystem(pd.DatetimeIndex(data.index))
flow_system.add_elements(
- fx.Bus('Strom'),
- fx.Bus('Fernwärme'),
- fx.Bus('Gas'),
- fx.Bus('Kohle'),
- fx.Effect('costs', '€', 'Kosten', is_standard=True, is_objective=True),
- fx.Effect('CO2', 'kg', 'CO2_e-Emissionen'),
- fx.Effect('PE', 'kWh_PE', 'Primärenergie'),
+ *Buses.defaults(),
+ Buses.coal(),
+ Effects.costs(),
+ Effects.co2(),
+ Effects.primary_energy(),
fx.Sink(
- 'Wärmelast', sink=fx.Flow('Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=thermal_load_ts)
+ 'Wärmelast', inputs=[fx.Flow('Q_th_Last', bus='Fernwärme', size=1, fixed_relative_profile=thermal_load_ts)]
+ ),
+ fx.Sink(
+ 'Stromlast', inputs=[fx.Flow('P_el_Last', bus='Strom', size=1, fixed_relative_profile=electrical_load_ts)]
),
- fx.Sink('Stromlast', sink=fx.Flow('P_el_Last', bus='Strom', size=1, fixed_relative_profile=electrical_load_ts)),
fx.Source(
'Kohletarif',
- source=fx.Flow('Q_Kohle', bus='Kohle', size=1000, effects_per_flow_hour={'costs': 4.6, 'CO2': 0.3}),
+ outputs=[fx.Flow('Q_Kohle', bus='Kohle', size=1000, effects_per_flow_hour={'costs': 4.6, 'CO2': 0.3})],
),
fx.Source(
'Gastarif',
- source=fx.Flow('Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={'costs': gas_price, 'CO2': 0.3}),
+ outputs=[fx.Flow('Q_Gas', bus='Gas', size=1000, effects_per_flow_hour={'costs': gas_price, 'CO2': 0.3})],
),
- fx.Sink('Einspeisung', sink=fx.Flow('P_el', bus='Strom', size=1000, effects_per_flow_hour=p_feed_in)),
+ fx.Sink('Einspeisung', inputs=[fx.Flow('P_el', bus='Strom', size=1000, effects_per_flow_hour=p_feed_in)]),
fx.Source(
'Stromtarif',
- source=fx.Flow('P_el', bus='Strom', size=1000, effects_per_flow_hour={'costs': p_sell, 'CO2': 0.3}),
+ outputs=[fx.Flow('P_el', bus='Strom', size=1000, effects_per_flow_hour={'costs': p_sell, 'CO2': 0.3})],
),
)
@@ -406,6 +638,71 @@ def flow_system_long():
}
+@pytest.fixture(params=['h', '3h'], ids=['hourly', '3-hourly'])
+def timesteps_linopy(request):
+ return pd.date_range('2020-01-01', periods=10, freq=request.param, name='time')
+
+
+@pytest.fixture
+def basic_flow_system_linopy(timesteps_linopy) -> fx.FlowSystem:
+ """Create basic elements for component testing"""
+ flow_system = fx.FlowSystem(timesteps_linopy)
+
+ n = len(flow_system.timesteps)
+ thermal_load = LoadProfiles.random_thermal(n)
+ p_el = LoadProfiles.random_electrical(n)
+
+ costs = Effects.costs()
+ heat_load = Sinks.heat_load(thermal_load)
+ gas_source = Sources.gas_with_costs()
+ electricity_sink = Sinks.electricity_feed_in(p_el)
+
+ flow_system.add_elements(*Buses.defaults())
+ flow_system.add_elements(costs, heat_load, gas_source, electricity_sink)
+
+ return flow_system
+
+
+@pytest.fixture
+def basic_flow_system_linopy_coords(coords_config) -> fx.FlowSystem:
+ """Create basic elements for component testing with coordinate parametrization."""
+ flow_system = fx.FlowSystem(**coords_config)
+
+ thermal_load = LoadProfiles.random_thermal(10)
+ p_el = LoadProfiles.random_electrical(10)
+
+ costs = Effects.costs()
+ heat_load = Sinks.heat_load(thermal_load)
+ gas_source = Sources.gas_with_costs()
+ electricity_sink = Sinks.electricity_feed_in(p_el)
+
+ flow_system.add_elements(*Buses.defaults())
+ flow_system.add_elements(costs, heat_load, gas_source, electricity_sink)
+
+ return flow_system
+
+
+# ============================================================================
+# UTILITY FUNCTIONS (kept for backward compatibility)
+# ============================================================================
+
+
+# Custom assertion function
+def assert_almost_equal_numeric(
+ actual, desired, err_msg, relative_error_range_in_percent=0.011, absolute_tolerance=1e-7
+):
+ """
+ Custom assertion function for comparing numeric values with relative and absolute tolerances
+ """
+ relative_tol = relative_error_range_in_percent / 100
+
+ if isinstance(desired, (int, float)):
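+ # For scalar targets, derive an absolute tolerance from the relative error range,
+ # falling back to absolute_tolerance when the desired value is zero.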
+ delta = abs(relative_tol * desired) if desired != 0 else absolute_tolerance
+ assert np.isclose(actual, desired, atol=delta), err_msg
+ else:
+ np.testing.assert_allclose(actual, desired, rtol=relative_tol, atol=absolute_tolerance, err_msg=err_msg)
+
+
def create_calculation_and_solve(
flow_system: fx.FlowSystem, solver, name: str, allow_infeasible: bool = False
) -> fx.FullCalculation:
@@ -421,37 +718,21 @@ def create_calculation_and_solve(
return calculation
-def create_linopy_model(flow_system: fx.FlowSystem) -> SystemModel:
+def create_linopy_model(flow_system: fx.FlowSystem) -> FlowSystemModel:
+ """
+ Create a FlowSystemModel from a FlowSystem by performing the modeling phase.
+
+ Args:
+ flow_system: The FlowSystem to build the model from.
+
+ Returns:
+ FlowSystemModel: The built model from FullCalculation.do_modeling().
+ """
calculation = fx.FullCalculation('GenericName', flow_system)
calculation.do_modeling()
return calculation.model
-@pytest.fixture(params=['h', '3h'])
-def timesteps_linopy(request):
- return pd.date_range('2020-01-01', periods=10, freq=request.param, name='time')
-
-
-@pytest.fixture
-def basic_flow_system_linopy(timesteps_linopy) -> fx.FlowSystem:
- """Create basic elements for component testing"""
- flow_system = fx.FlowSystem(timesteps_linopy)
- thermal_load = np.array([np.random.random() for _ in range(10)]) * 180
- p_el = (np.array([np.random.random() for _ in range(10)]) + 0.5) / 1.5 * 50
-
- flow_system.add_elements(
- fx.Bus('Strom'),
- fx.Bus('Fernwärme'),
- fx.Bus('Gas'),
- fx.Effect('Costs', '€', 'Kosten', is_standard=True, is_objective=True),
- fx.Sink('Wärmelast', sink=fx.Flow('Q_th_Last', 'Fernwärme', size=1, fixed_relative_profile=thermal_load)),
- fx.Source('Gastarif', source=fx.Flow('Q_Gas', 'Gas', size=1000, effects_per_flow_hour=0.04)),
- fx.Sink('Einspeisung', sink=fx.Flow('P_el', 'Strom', effects_per_flow_hour=-1 * p_el)),
- )
-
- return flow_system
-
-
def assert_conequal(actual: linopy.Constraint, desired: linopy.Constraint):
"""Assert that two constraints are equal with detailed error messages."""
@@ -506,3 +787,65 @@ def assert_var_equal(actual: linopy.Variable, desired: linopy.Variable):
if actual.coord_dims != desired.coord_dims:
raise AssertionError(f"{name} coordinate dimensions don't match: {actual.coord_dims} != {desired.coord_dims}")
+
+
+def assert_sets_equal(set1: Iterable, set2: Iterable, msg=''):
+ """Assert two sets are equal with custom error message."""
+ set1, set2 = set(set1), set(set2)
+
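+ # Report extra and missing elements separately for a clearer failure message.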
+ extra = set1 - set2
+ missing = set2 - set1
+
+ if extra or missing:
+ parts = []
+ if extra:
+ parts.append(f'Extra: {sorted(extra, key=repr)}')
+ if missing:
+ parts.append(f'Missing: {sorted(missing, key=repr)}')
+
+ error_msg = ', '.join(parts)
+ if msg:
+ error_msg = f'{msg}: {error_msg}'
+
+ raise AssertionError(error_msg)
+
+
+# ============================================================================
+# PLOTTING CLEANUP FIXTURES
+# ============================================================================
+
+
+@pytest.fixture(autouse=True)
+def cleanup_figures():
+ """
+ Cleanup matplotlib figures after each test.
+
+ This fixture runs automatically after every test to:
+ - Close all matplotlib figures to prevent memory leaks
+ """
+ yield
+ # Close all matplotlib figures
+ import matplotlib.pyplot as plt
+
+ plt.close('all')
+
+
+@pytest.fixture(scope='session', autouse=True)
+def set_test_environment():
+ """
+ Configure plotting for test environment.
+
+ This fixture runs once per test session to:
+ - Set matplotlib to use non-interactive 'Agg' backend
+ - Set plotly to use non-interactive 'json' renderer
+ - Prevent GUI windows from opening during tests
+ """
+ import matplotlib
+
+ matplotlib.use('Agg') # Use non-interactive backend
+
+ import plotly.io as pio
+
+ pio.renderers.default = 'json' # Use non-interactive renderer
+
+ yield
diff --git a/tests/run_all_tests.py b/tests/run_all_tests.py
index 5597a47f3..83b6dfacf 100644
--- a/tests/run_all_tests.py
+++ b/tests/run_all_tests.py
@@ -7,4 +7,4 @@
import pytest
if __name__ == '__main__':
- pytest.main(['test_functional.py', '--disable-warnings'])
+ pytest.main(['test_integration.py', '--disable-warnings'])
diff --git a/tests/test_bus.py b/tests/test_bus.py
index 2462ab14f..0a5b19d8d 100644
--- a/tests/test_bus.py
+++ b/tests/test_bus.py
@@ -6,47 +6,50 @@
class TestBusModel:
"""Test the FlowModel class."""
- def test_bus(self, basic_flow_system_linopy):
+ def test_bus(self, basic_flow_system_linopy_coords, coords_config):
"""Test that flow model constraints are correctly generated."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
bus = fx.Bus('TestBus', excess_penalty_per_flow_hour=None)
flow_system.add_elements(
bus,
- fx.Sink('WärmelastTest', sink=fx.Flow('Q_th_Last', 'TestBus')),
- fx.Source('GastarifTest', source=fx.Flow('Q_Gas', 'TestBus')),
+ fx.Sink('WärmelastTest', inputs=[fx.Flow('Q_th_Last', 'TestBus')]),
+ fx.Source('GastarifTest', outputs=[fx.Flow('Q_Gas', 'TestBus')]),
)
model = create_linopy_model(flow_system)
- assert set(bus.model.variables) == {'WärmelastTest(Q_th_Last)|flow_rate', 'GastarifTest(Q_Gas)|flow_rate'}
- assert set(bus.model.constraints) == {'TestBus|balance'}
+ assert set(bus.submodel.variables) == {'WärmelastTest(Q_th_Last)|flow_rate', 'GastarifTest(Q_Gas)|flow_rate'}
+ assert set(bus.submodel.constraints) == {'TestBus|balance'}
assert_conequal(
model.constraints['TestBus|balance'],
model.variables['GastarifTest(Q_Gas)|flow_rate'] == model.variables['WärmelastTest(Q_th_Last)|flow_rate'],
)
- def test_bus_penalty(self, basic_flow_system_linopy):
+ def test_bus_penalty(self, basic_flow_system_linopy_coords, coords_config):
"""Test that flow model constraints are correctly generated."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
bus = fx.Bus('TestBus')
flow_system.add_elements(
bus,
- fx.Sink('WärmelastTest', sink=fx.Flow('Q_th_Last', 'TestBus')),
- fx.Source('GastarifTest', source=fx.Flow('Q_Gas', 'TestBus')),
+ fx.Sink('WärmelastTest', inputs=[fx.Flow('Q_th_Last', 'TestBus')]),
+ fx.Source('GastarifTest', outputs=[fx.Flow('Q_Gas', 'TestBus')]),
)
model = create_linopy_model(flow_system)
- assert set(bus.model.variables) == {
+ assert set(bus.submodel.variables) == {
'TestBus|excess_input',
'TestBus|excess_output',
'WärmelastTest(Q_th_Last)|flow_rate',
'GastarifTest(Q_Gas)|flow_rate',
}
- assert set(bus.model.constraints) == {'TestBus|balance'}
+ assert set(bus.submodel.constraints) == {'TestBus|balance'}
- assert_var_equal(model.variables['TestBus|excess_input'], model.add_variables(lower=0, coords=(timesteps,)))
- assert_var_equal(model.variables['TestBus|excess_output'], model.add_variables(lower=0, coords=(timesteps,)))
+ assert_var_equal(
+ model.variables['TestBus|excess_input'], model.add_variables(lower=0, coords=model.get_coords())
+ )
+ assert_var_equal(
+ model.variables['TestBus|excess_output'], model.add_variables(lower=0, coords=model.get_coords())
+ )
assert_conequal(
model.constraints['TestBus|balance'],
@@ -63,3 +66,29 @@ def test_bus_penalty(self, basic_flow_system_linopy):
== (model.variables['TestBus|excess_input'] * 1e5 * model.hours_per_step).sum()
+ (model.variables['TestBus|excess_output'] * 1e5 * model.hours_per_step).sum(),
)
+
+ def test_bus_with_coords(self, basic_flow_system_linopy_coords, coords_config):
+ """Test bus behavior across different coordinate configurations."""
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ bus = fx.Bus('TestBus', excess_penalty_per_flow_hour=None)
+ flow_system.add_elements(
+ bus,
+ fx.Sink('WärmelastTest', inputs=[fx.Flow('Q_th_Last', 'TestBus')]),
+ fx.Source('GastarifTest', outputs=[fx.Flow('Q_Gas', 'TestBus')]),
+ )
+ model = create_linopy_model(flow_system)
+
+ # Same core assertions as in test_bus above
+ assert set(bus.submodel.variables) == {'WärmelastTest(Q_th_Last)|flow_rate', 'GastarifTest(Q_Gas)|flow_rate'}
+ assert set(bus.submodel.constraints) == {'TestBus|balance'}
+
+ assert_conequal(
+ model.constraints['TestBus|balance'],
+ model.variables['GastarifTest(Q_Gas)|flow_rate'] == model.variables['WärmelastTest(Q_th_Last)|flow_rate'],
+ )
+
+ # Just verify coordinate dimensions are correct
+ gas_var = model.variables['GastarifTest(Q_Gas)|flow_rate']
+ if flow_system.scenarios is not None:
+ assert 'scenario' in gas_var.dims
+ assert 'time' in gas_var.dims
diff --git a/tests/test_component.py b/tests/test_component.py
index 47a6219da..be1eecf3b 100644
--- a/tests/test_component.py
+++ b/tests/test_component.py
@@ -4,13 +4,19 @@
import flixopt as fx
import flixopt.elements
-from .conftest import assert_conequal, assert_var_equal, create_linopy_model
+from .conftest import (
+ assert_almost_equal_numeric,
+ assert_conequal,
+ assert_sets_equal,
+ assert_var_equal,
+ create_calculation_and_solve,
+ create_linopy_model,
+)
class TestComponentModel:
- def test_flow_label_check(self, basic_flow_system_linopy):
+ def test_flow_label_check(self):
"""Test that flow model constraints are correctly generated."""
- _ = basic_flow_system_linopy
inputs = [
fx.Flow('Q_th_Last', 'Fernwärme', relative_minimum=np.ones(10) * 0.1),
fx.Flow('Q_Gas', 'Fernwärme', relative_minimum=np.ones(10) * 0.1),
@@ -22,9 +28,9 @@ def test_flow_label_check(self, basic_flow_system_linopy):
with pytest.raises(ValueError, match='Flow names must be unique!'):
_ = flixopt.elements.Component('TestComponent', inputs=inputs, outputs=outputs)
- def test_component(self, basic_flow_system_linopy):
+ def test_component(self, basic_flow_system_linopy_coords, coords_config):
"""Test that flow model constraints are correctly generated."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
inputs = [
fx.Flow('In1', 'Fernwärme', relative_minimum=np.ones(10) * 0.1),
fx.Flow('In2', 'Fernwärme', relative_minimum=np.ones(10) * 0.1),
@@ -37,28 +43,36 @@ def test_component(self, basic_flow_system_linopy):
flow_system.add_elements(comp)
_ = create_linopy_model(flow_system)
- assert {
- 'TestComponent(In1)|flow_rate',
- 'TestComponent(In1)|total_flow_hours',
- 'TestComponent(In2)|flow_rate',
- 'TestComponent(In2)|total_flow_hours',
- 'TestComponent(Out1)|flow_rate',
- 'TestComponent(Out1)|total_flow_hours',
- 'TestComponent(Out2)|flow_rate',
- 'TestComponent(Out2)|total_flow_hours',
- } == set(comp.model.variables)
-
- assert {
- 'TestComponent(In1)|total_flow_hours',
- 'TestComponent(In2)|total_flow_hours',
- 'TestComponent(Out1)|total_flow_hours',
- 'TestComponent(Out2)|total_flow_hours',
- } == set(comp.model.constraints)
-
- def test_on_with_multiple_flows(self, basic_flow_system_linopy):
+ assert_sets_equal(
+ set(comp.submodel.variables),
+ {
+ 'TestComponent(In1)|flow_rate',
+ 'TestComponent(In1)|total_flow_hours',
+ 'TestComponent(In2)|flow_rate',
+ 'TestComponent(In2)|total_flow_hours',
+ 'TestComponent(Out1)|flow_rate',
+ 'TestComponent(Out1)|total_flow_hours',
+ 'TestComponent(Out2)|flow_rate',
+ 'TestComponent(Out2)|total_flow_hours',
+ },
+ msg='Incorrect variables',
+ )
+
+ assert_sets_equal(
+ set(comp.submodel.constraints),
+ {
+ 'TestComponent(In1)|total_flow_hours',
+ 'TestComponent(In2)|total_flow_hours',
+ 'TestComponent(Out1)|total_flow_hours',
+ 'TestComponent(Out2)|total_flow_hours',
+ },
+ msg='Incorrect constraints',
+ )
+
+ def test_on_with_multiple_flows(self, basic_flow_system_linopy_coords, coords_config):
"""Test that flow model constraints are correctly generated."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+
ub_out2 = np.linspace(1, 1.5, 10).round(2)
inputs = [
fx.Flow('In1', 'Fernwärme', relative_minimum=np.ones(10) * 0.1, size=100),
@@ -73,81 +87,94 @@ def test_on_with_multiple_flows(self, basic_flow_system_linopy):
flow_system.add_elements(comp)
model = create_linopy_model(flow_system)
- assert {
- 'TestComponent(In1)|flow_rate',
- 'TestComponent(In1)|total_flow_hours',
- 'TestComponent(In1)|on',
- 'TestComponent(In1)|on_hours_total',
- 'TestComponent(Out1)|flow_rate',
- 'TestComponent(Out1)|total_flow_hours',
- 'TestComponent(Out1)|on',
- 'TestComponent(Out1)|on_hours_total',
- 'TestComponent(Out2)|flow_rate',
- 'TestComponent(Out2)|total_flow_hours',
- 'TestComponent(Out2)|on',
- 'TestComponent(Out2)|on_hours_total',
- 'TestComponent|on',
- 'TestComponent|on_hours_total',
- } == set(comp.model.variables)
-
- assert {
- 'TestComponent(In1)|total_flow_hours',
- 'TestComponent(In1)|on_con1',
- 'TestComponent(In1)|on_con2',
- 'TestComponent(In1)|on_hours_total',
- 'TestComponent(Out1)|total_flow_hours',
- 'TestComponent(Out1)|on_con1',
- 'TestComponent(Out1)|on_con2',
- 'TestComponent(Out1)|on_hours_total',
- 'TestComponent(Out2)|total_flow_hours',
- 'TestComponent(Out2)|on_con1',
- 'TestComponent(Out2)|on_con2',
- 'TestComponent(Out2)|on_hours_total',
- 'TestComponent|on_con1',
- 'TestComponent|on_con2',
- 'TestComponent|on_hours_total',
- } == set(comp.model.constraints)
+ assert_sets_equal(
+ set(comp.submodel.variables),
+ {
+ 'TestComponent(In1)|flow_rate',
+ 'TestComponent(In1)|total_flow_hours',
+ 'TestComponent(In1)|on',
+ 'TestComponent(In1)|on_hours_total',
+ 'TestComponent(Out1)|flow_rate',
+ 'TestComponent(Out1)|total_flow_hours',
+ 'TestComponent(Out1)|on',
+ 'TestComponent(Out1)|on_hours_total',
+ 'TestComponent(Out2)|flow_rate',
+ 'TestComponent(Out2)|total_flow_hours',
+ 'TestComponent(Out2)|on',
+ 'TestComponent(Out2)|on_hours_total',
+ 'TestComponent|on',
+ 'TestComponent|on_hours_total',
+ },
+ msg='Incorrect variables',
+ )
+
+ assert_sets_equal(
+ set(comp.submodel.constraints),
+ {
+ 'TestComponent(In1)|total_flow_hours',
+ 'TestComponent(In1)|flow_rate|lb',
+ 'TestComponent(In1)|flow_rate|ub',
+ 'TestComponent(In1)|on_hours_total',
+ 'TestComponent(Out1)|total_flow_hours',
+ 'TestComponent(Out1)|flow_rate|lb',
+ 'TestComponent(Out1)|flow_rate|ub',
+ 'TestComponent(Out1)|on_hours_total',
+ 'TestComponent(Out2)|total_flow_hours',
+ 'TestComponent(Out2)|flow_rate|lb',
+ 'TestComponent(Out2)|flow_rate|ub',
+ 'TestComponent(Out2)|on_hours_total',
+ 'TestComponent|on|lb',
+ 'TestComponent|on|ub',
+ 'TestComponent|on_hours_total',
+ },
+ msg='Incorrect constraints',
+ )
+
+ upper_bound_flow_rate = outputs[1].relative_maximum
+
+ assert upper_bound_flow_rate.dims == tuple(model.get_coords())
assert_var_equal(
model['TestComponent(Out2)|flow_rate'],
- model.add_variables(lower=0, upper=300 * ub_out2, coords=(timesteps,)),
+ model.add_variables(lower=0, upper=300 * upper_bound_flow_rate, coords=model.get_coords()),
)
- assert_var_equal(model['TestComponent|on'], model.add_variables(binary=True, coords=(timesteps,)))
- assert_var_equal(model['TestComponent(Out2)|on'], model.add_variables(binary=True, coords=(timesteps,)))
+ assert_var_equal(model['TestComponent|on'], model.add_variables(binary=True, coords=model.get_coords()))
+ assert_var_equal(model['TestComponent(Out2)|on'], model.add_variables(binary=True, coords=model.get_coords()))
assert_conequal(
- model.constraints['TestComponent(Out2)|on_con1'],
- model.variables['TestComponent(Out2)|on'] * 0.3 * 300 <= model.variables['TestComponent(Out2)|flow_rate'],
+ model.constraints['TestComponent(Out2)|flow_rate|lb'],
+ model.variables['TestComponent(Out2)|flow_rate'] >= model.variables['TestComponent(Out2)|on'] * 0.3 * 300,
)
assert_conequal(
- model.constraints['TestComponent(Out2)|on_con2'],
- model.variables['TestComponent(Out2)|on'] * 300 * ub_out2
- >= model.variables['TestComponent(Out2)|flow_rate'],
+ model.constraints['TestComponent(Out2)|flow_rate|ub'],
+ model.variables['TestComponent(Out2)|flow_rate']
+ <= model.variables['TestComponent(Out2)|on'] * 300 * upper_bound_flow_rate,
)
assert_conequal(
- model.constraints['TestComponent|on_con1'],
- model.variables['TestComponent|on'] * 1e-5
- <= model.variables['TestComponent(In1)|flow_rate']
- + model.variables['TestComponent(Out1)|flow_rate']
- + model.variables['TestComponent(Out2)|flow_rate'],
+ model.constraints['TestComponent|on|lb'],
+ model.variables['TestComponent|on']
+ >= (
+ model.variables['TestComponent(In1)|on']
+ + model.variables['TestComponent(Out1)|on']
+ + model.variables['TestComponent(Out2)|on']
+ )
+ / (3 + 1e-5),
)
- # TODO: Might there be a better way to no use 1e-5?
assert_conequal(
- model.constraints['TestComponent|on_con2'],
- model.variables['TestComponent|on'] * (100 + 200 + 300 * ub_out2) / 3
- >= (
- model.variables['TestComponent(In1)|flow_rate']
- + model.variables['TestComponent(Out1)|flow_rate']
- + model.variables['TestComponent(Out2)|flow_rate']
+ model.constraints['TestComponent|on|ub'],
+ model.variables['TestComponent|on']
+ <= (
+ model.variables['TestComponent(In1)|on']
+ + model.variables['TestComponent(Out1)|on']
+ + model.variables['TestComponent(Out2)|on']
)
- / 3,
+ + 1e-5,
)
- def test_on_with_single_flow(self, basic_flow_system_linopy):
+ def test_on_with_single_flow(self, basic_flow_system_linopy_coords, coords_config):
"""Test that flow model constraints are correctly generated."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
inputs = [
fx.Flow('In1', 'Fernwärme', relative_minimum=np.ones(10) * 0.1, size=100),
]
@@ -158,45 +185,417 @@ def test_on_with_single_flow(self, basic_flow_system_linopy):
flow_system.add_elements(comp)
model = create_linopy_model(flow_system)
- assert {
- 'TestComponent(In1)|flow_rate',
- 'TestComponent(In1)|total_flow_hours',
- 'TestComponent(In1)|on',
- 'TestComponent(In1)|on_hours_total',
- 'TestComponent|on',
- 'TestComponent|on_hours_total',
- } == set(comp.model.variables)
-
- assert {
- 'TestComponent(In1)|total_flow_hours',
- 'TestComponent(In1)|on_con1',
- 'TestComponent(In1)|on_con2',
- 'TestComponent(In1)|on_hours_total',
- 'TestComponent|on_con1',
- 'TestComponent|on_con2',
- 'TestComponent|on_hours_total',
- } == set(comp.model.constraints)
+ assert_sets_equal(
+ set(comp.submodel.variables),
+ {
+ 'TestComponent(In1)|flow_rate',
+ 'TestComponent(In1)|total_flow_hours',
+ 'TestComponent(In1)|on',
+ 'TestComponent(In1)|on_hours_total',
+ 'TestComponent|on',
+ 'TestComponent|on_hours_total',
+ },
+ msg='Incorrect variables',
+ )
+
+ assert_sets_equal(
+ set(comp.submodel.constraints),
+ {
+ 'TestComponent(In1)|total_flow_hours',
+ 'TestComponent(In1)|flow_rate|lb',
+ 'TestComponent(In1)|flow_rate|ub',
+ 'TestComponent(In1)|on_hours_total',
+ 'TestComponent|on',
+ 'TestComponent|on_hours_total',
+ },
+ msg='Incorrect constraints',
+ )
assert_var_equal(
- model['TestComponent(In1)|flow_rate'], model.add_variables(lower=0, upper=100, coords=(timesteps,))
+ model['TestComponent(In1)|flow_rate'], model.add_variables(lower=0, upper=100, coords=model.get_coords())
)
- assert_var_equal(model['TestComponent|on'], model.add_variables(binary=True, coords=(timesteps,)))
- assert_var_equal(model['TestComponent(In1)|on'], model.add_variables(binary=True, coords=(timesteps,)))
+ assert_var_equal(model['TestComponent|on'], model.add_variables(binary=True, coords=model.get_coords()))
+ assert_var_equal(model['TestComponent(In1)|on'], model.add_variables(binary=True, coords=model.get_coords()))
assert_conequal(
- model.constraints['TestComponent(In1)|on_con1'],
- model.variables['TestComponent(In1)|on'] * 0.1 * 100 <= model.variables['TestComponent(In1)|flow_rate'],
+ model.constraints['TestComponent(In1)|flow_rate|lb'],
+ model.variables['TestComponent(In1)|flow_rate'] >= model.variables['TestComponent(In1)|on'] * 0.1 * 100,
)
assert_conequal(
- model.constraints['TestComponent(In1)|on_con2'],
- model.variables['TestComponent(In1)|on'] * 100 >= model.variables['TestComponent(In1)|flow_rate'],
+ model.constraints['TestComponent(In1)|flow_rate|ub'],
+ model.variables['TestComponent(In1)|flow_rate'] <= model.variables['TestComponent(In1)|on'] * 100,
)
assert_conequal(
- model.constraints['TestComponent|on_con1'],
- model.variables['TestComponent|on'] * 0.1 * 100 <= model.variables['TestComponent(In1)|flow_rate'],
+ model.constraints['TestComponent|on'],
+ model.variables['TestComponent|on'] == model.variables['TestComponent(In1)|on'],
+ )
+
+ def test_previous_states_with_multiple_flows(self, basic_flow_system_linopy_coords, coords_config):
+ """Test that flow model constraints are correctly generated."""
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+
+ ub_out2 = np.linspace(1, 1.5, 10).round(2)
+ inputs = [
+ fx.Flow(
+ 'In1',
+ 'Fernwärme',
+ relative_minimum=np.ones(10) * 0.1,
+ size=100,
+ previous_flow_rate=np.array([0, 0, 1e-6, 1e-5, 1e-4, 3, 4]),
+ ),
+ ]
+ outputs = [
+ fx.Flow('Out1', 'Gas', relative_minimum=np.ones(10) * 0.2, size=200, previous_flow_rate=[3, 4, 5]),
+ fx.Flow(
+ 'Out2',
+ 'Gas',
+ relative_minimum=np.ones(10) * 0.3,
+ relative_maximum=ub_out2,
+ size=300,
+ previous_flow_rate=20,
+ ),
+ ]
+ comp = flixopt.elements.Component(
+ 'TestComponent', inputs=inputs, outputs=outputs, on_off_parameters=fx.OnOffParameters()
+ )
+ flow_system.add_elements(comp)
+ model = create_linopy_model(flow_system)
+
+ assert_sets_equal(
+ set(comp.submodel.variables),
+ {
+ 'TestComponent(In1)|flow_rate',
+ 'TestComponent(In1)|total_flow_hours',
+ 'TestComponent(In1)|on',
+ 'TestComponent(In1)|on_hours_total',
+ 'TestComponent(Out1)|flow_rate',
+ 'TestComponent(Out1)|total_flow_hours',
+ 'TestComponent(Out1)|on',
+ 'TestComponent(Out1)|on_hours_total',
+ 'TestComponent(Out2)|flow_rate',
+ 'TestComponent(Out2)|total_flow_hours',
+ 'TestComponent(Out2)|on',
+ 'TestComponent(Out2)|on_hours_total',
+ 'TestComponent|on',
+ 'TestComponent|on_hours_total',
+ },
+ msg='Incorrect variables',
+ )
+
+ assert_sets_equal(
+ set(comp.submodel.constraints),
+ {
+ 'TestComponent(In1)|total_flow_hours',
+ 'TestComponent(In1)|flow_rate|lb',
+ 'TestComponent(In1)|flow_rate|ub',
+ 'TestComponent(In1)|on_hours_total',
+ 'TestComponent(Out1)|total_flow_hours',
+ 'TestComponent(Out1)|flow_rate|lb',
+ 'TestComponent(Out1)|flow_rate|ub',
+ 'TestComponent(Out1)|on_hours_total',
+ 'TestComponent(Out2)|total_flow_hours',
+ 'TestComponent(Out2)|flow_rate|lb',
+ 'TestComponent(Out2)|flow_rate|ub',
+ 'TestComponent(Out2)|on_hours_total',
+ 'TestComponent|on|lb',
+ 'TestComponent|on|ub',
+ 'TestComponent|on_hours_total',
+ },
+ msg='Incorrect constraints',
+ )
+
+ upper_bound_flow_rate = outputs[1].relative_maximum
+
+ assert upper_bound_flow_rate.dims == tuple(model.get_coords())
+
+ assert_var_equal(
+ model['TestComponent(Out2)|flow_rate'],
+ model.add_variables(lower=0, upper=300 * upper_bound_flow_rate, coords=model.get_coords()),
+ )
+ assert_var_equal(model['TestComponent|on'], model.add_variables(binary=True, coords=model.get_coords()))
+ assert_var_equal(model['TestComponent(Out2)|on'], model.add_variables(binary=True, coords=model.get_coords()))
+
+ assert_conequal(
+ model.constraints['TestComponent(Out2)|flow_rate|lb'],
+ model.variables['TestComponent(Out2)|flow_rate'] >= model.variables['TestComponent(Out2)|on'] * 0.3 * 300,
+ )
+ assert_conequal(
+ model.constraints['TestComponent(Out2)|flow_rate|ub'],
+ model.variables['TestComponent(Out2)|flow_rate']
+ <= model.variables['TestComponent(Out2)|on'] * 300 * upper_bound_flow_rate,
+ )
+
+ assert_conequal(
+ model.constraints['TestComponent|on|lb'],
+ model.variables['TestComponent|on']
+ >= (
+ model.variables['TestComponent(In1)|on']
+ + model.variables['TestComponent(Out1)|on']
+ + model.variables['TestComponent(Out2)|on']
+ )
+ / (3 + 1e-5),
+ )
+ assert_conequal(
+ model.constraints['TestComponent|on|ub'],
+ model.variables['TestComponent|on']
+ <= (
+ model.variables['TestComponent(In1)|on']
+ + model.variables['TestComponent(Out1)|on']
+ + model.variables['TestComponent(Out2)|on']
+ )
+ + 1e-5,
+ )
+
+ @pytest.mark.parametrize(
+ 'in1_previous_flow_rate, out1_previous_flow_rate, out2_previous_flow_rate, previous_on_hours',
+ [
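+ # previous_on_hours is the expected length of the component's on-run before t=0,
+ # taken over the trailing entries of all previous flow rates (a flow counts as on
+ # when its previous rate exceeds the modeling epsilon of ~1e-5).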
+ (None, None, None, 0),
+ (np.array([0, 1e-6, 1e-4, 5]), None, None, 2),
+ (np.array([0, 5, 0, 5]), None, None, 1),
+ (np.array([0, 5, 0, 0]), 3, 0, 1),
+ (np.array([0, 0, 2, 0, 4, 5]), [3, 4, 5], None, 4),
+ ],
+ )
+ def test_previous_states_with_multiple_flows_parameterized(
+ self,
+ basic_flow_system_linopy_coords,
+ coords_config,
+ in1_previous_flow_rate,
+ out1_previous_flow_rate,
+ out2_previous_flow_rate,
+ previous_on_hours,
+ ):
+ """Test that flow model constraints are correctly generated with different previous flow rates and constraint factors."""
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+
+ ub_out2 = np.linspace(1, 1.5, 10).round(2)
+ inputs = [
+ fx.Flow(
+ 'In1',
+ 'Fernwärme',
+ relative_minimum=np.ones(10) * 0.1,
+ size=100,
+ previous_flow_rate=in1_previous_flow_rate,
+ on_off_parameters=fx.OnOffParameters(consecutive_on_hours_min=3),
+ ),
+ ]
+ outputs = [
+ fx.Flow(
+ 'Out1', 'Gas', relative_minimum=np.ones(10) * 0.2, size=200, previous_flow_rate=out1_previous_flow_rate
+ ),
+ fx.Flow(
+ 'Out2',
+ 'Gas',
+ relative_minimum=np.ones(10) * 0.3,
+ relative_maximum=ub_out2,
+ size=300,
+ previous_flow_rate=out2_previous_flow_rate,
+ ),
+ ]
+ comp = flixopt.elements.Component(
+ 'TestComponent',
+ inputs=inputs,
+ outputs=outputs,
+ on_off_parameters=fx.OnOffParameters(consecutive_on_hours_min=3),
)
+ flow_system.add_elements(comp)
+ create_linopy_model(flow_system)
+
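+ # At the first timestep the consecutive-on counter continues the previous run:
+ # it equals previous_on_hours + 1 if the component starts on, and 0 otherwise.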
assert_conequal(
- model.constraints['TestComponent|on_con2'],
- model.variables['TestComponent|on'] * 100 >= model.variables['TestComponent(In1)|flow_rate'],
+ comp.submodel.constraints['TestComponent|consecutive_on_hours|initial'],
+ comp.submodel.variables['TestComponent|consecutive_on_hours'].isel(time=0)
+ == comp.submodel.variables['TestComponent|on'].isel(time=0) * (previous_on_hours + 1),
+ )
+
+
+class TestTransmissionModel:
+ def test_transmission_basic(self, basic_flow_system, highs_solver):
+ """Test basic transmission functionality"""
+ flow_system = basic_flow_system
+ flow_system.add_elements(fx.Bus('Wärme lokal'))
+
+ boiler = fx.linear_converters.Boiler(
+ 'Boiler', eta=0.5, Q_th=fx.Flow('Q_th', bus='Wärme lokal'), Q_fu=fx.Flow('Q_fu', bus='Gas')
+ )
+
+ transmission = fx.Transmission(
+ 'Rohr',
+ relative_losses=0.2,
+ absolute_losses=20,
+ in1=fx.Flow(
+ 'Rohr1', 'Wärme lokal', size=fx.InvestParameters(effects_of_investment_per_size=5, maximum_size=1e6)
+ ),
+ out1=fx.Flow('Rohr2', 'Fernwärme', size=1000),
+ )
+
+ flow_system.add_elements(transmission, boiler)
+
+ _ = create_calculation_and_solve(flow_system, highs_solver, 'test_transmission_basic')
+
+ # Assertions
+ assert_almost_equal_numeric(
+ transmission.in1.submodel.on_off.on.solution.values,
+ np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
+ 'On does not work properly',
+ )
+
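+ # With relative_losses=0.2 and absolute_losses=20, the outlet flow should satisfy
+ # out1 = 0.8 * in1 - 20 whenever the line is on.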
+ assert_almost_equal_numeric(
+ transmission.in1.submodel.flow_rate.solution.values * 0.8 - 20,
+ transmission.out1.submodel.flow_rate.solution.values,
+ 'Losses are not computed correctly',
+ )
+
+ def test_transmission_balanced(self, basic_flow_system, highs_solver):
+ """Test advanced transmission functionality"""
+ flow_system = basic_flow_system
+ flow_system.add_elements(fx.Bus('Wärme lokal'))
+
+ boiler = fx.linear_converters.Boiler(
+ 'Boiler_Standard',
+ eta=0.9,
+ Q_th=fx.Flow('Q_th', bus='Fernwärme', relative_maximum=np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1])),
+ Q_fu=fx.Flow('Q_fu', bus='Gas'),
+ )
+
+ boiler2 = fx.linear_converters.Boiler(
+ 'Boiler_backup', eta=0.4, Q_th=fx.Flow('Q_th', bus='Wärme lokal'), Q_fu=fx.Flow('Q_fu', bus='Gas')
+ )
+
+ last2 = fx.Sink(
+ 'Wärmelast2',
+ inputs=[
+ fx.Flow(
+ 'Q_th_Last',
+ bus='Wärme lokal',
+ size=1,
+ fixed_relative_profile=flow_system.components['Wärmelast'].inputs[0].fixed_relative_profile
+ * np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]),
+ )
+ ],
+ )
+
+ transmission = fx.Transmission(
+ 'Rohr',
+ relative_losses=0.2,
+ absolute_losses=20,
+ in1=fx.Flow(
+ 'Rohr1a',
+ bus='Wärme lokal',
+ size=fx.InvestParameters(effects_of_investment_per_size=5, maximum_size=1000),
+ ),
+ out1=fx.Flow('Rohr1b', 'Fernwärme', size=1000),
+ in2=fx.Flow('Rohr2a', 'Fernwärme', size=fx.InvestParameters()),
+ out2=fx.Flow('Rohr2b', bus='Wärme lokal', size=1000),
+ balanced=True,
+ )
+
+ flow_system.add_elements(transmission, boiler, boiler2, last2)
+
+ calculation = create_calculation_and_solve(flow_system, highs_solver, 'test_transmission_balanced')
+
+ # Assertions
+ assert_almost_equal_numeric(
+ transmission.in1.submodel.on_off.on.solution.values,
+ np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0]),
+ 'On does not work properly',
+ )
+
+ assert_almost_equal_numeric(
+ calculation.results.model.variables['Rohr(Rohr1b)|flow_rate'].solution.values,
+ transmission.out1.submodel.flow_rate.solution.values,
+ 'Flow rate of Rohr__Rohr1b is not correct',
+ )
+
+ assert_almost_equal_numeric(
+ transmission.in1.submodel.flow_rate.solution.values * 0.8
+ - np.array([20 if val > 0.1 else 0 for val in transmission.in1.submodel.flow_rate.solution.values]),
+ transmission.out1.submodel.flow_rate.solution.values,
+ 'Losses are not computed correctly',
+ )
+
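+ # balanced=True couples the two directions: both investment sizes must be equal.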
+ assert_almost_equal_numeric(
+ transmission.in1.submodel._investment.size.solution.item(),
+ transmission.in2.submodel._investment.size.solution.item(),
+ 'The Investments are not equated correctly',
+ )
+
+ def test_transmission_unbalanced(self, basic_flow_system, highs_solver):
+ """Test advanced transmission functionality"""
+ flow_system = basic_flow_system
+ flow_system.add_elements(fx.Bus('Wärme lokal'))
+
+ boiler = fx.linear_converters.Boiler(
+ 'Boiler_Standard',
+ eta=0.9,
+ Q_th=fx.Flow('Q_th', bus='Fernwärme', relative_maximum=np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1])),
+ Q_fu=fx.Flow('Q_fu', bus='Gas'),
+ )
+
+ boiler2 = fx.linear_converters.Boiler(
+ 'Boiler_backup', eta=0.4, Q_th=fx.Flow('Q_th', bus='Wärme lokal'), Q_fu=fx.Flow('Q_fu', bus='Gas')
+ )
+
+ last2 = fx.Sink(
+ 'Wärmelast2',
+ inputs=[
+ fx.Flow(
+ 'Q_th_Last',
+ bus='Wärme lokal',
+ size=1,
+ fixed_relative_profile=flow_system.components['Wärmelast'].inputs[0].fixed_relative_profile
+ * np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]),
+ )
+ ],
+ )
+
+ transmission = fx.Transmission(
+ 'Rohr',
+ relative_losses=0.2,
+ absolute_losses=20,
+ in1=fx.Flow(
+ 'Rohr1a',
+ bus='Wärme lokal',
+ size=fx.InvestParameters(effects_of_investment_per_size=50, maximum_size=1000),
+ ),
+ out1=fx.Flow('Rohr1b', 'Fernwärme', size=1000),
+ in2=fx.Flow(
+ 'Rohr2a',
+ 'Fernwärme',
+ size=fx.InvestParameters(effects_of_investment_per_size=100, minimum_size=10, mandatory=True),
+ ),
+ out2=fx.Flow('Rohr2b', bus='Wärme lokal', size=1000),
+ balanced=False,
+ )
+
+ flow_system.add_elements(transmission, boiler, boiler2, last2)
+
+ calculation = create_calculation_and_solve(flow_system, highs_solver, 'test_transmission_unbalanced')
+
+ # Assertions
+ assert_almost_equal_numeric(
+ transmission.in1.submodel.on_off.on.solution.values,
+ np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0]),
+ 'On does not work properly',
+ )
+
+ assert_almost_equal_numeric(
+ calculation.results.model.variables['Rohr(Rohr1b)|flow_rate'].solution.values,
+ transmission.out1.submodel.flow_rate.solution.values,
+ 'Flow rate of Rohr__Rohr1b is not correct',
+ )
+
+ assert_almost_equal_numeric(
+ transmission.in1.submodel.flow_rate.solution.values * 0.8
+ - np.array([20 if val > 0.1 else 0 for val in transmission.in1.submodel.flow_rate.solution.values]),
+ transmission.out1.submodel.flow_rate.solution.values,
+ 'Losses are not computed correctly',
+ )
+
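+ # With balanced=False the directions are sized independently: in1 is sized by the
+ # optimization, while the mandatory in2 investment settles at its minimum_size of 10.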
+ assert transmission.in1.submodel._investment.size.solution.item() > 11
+
+ assert_almost_equal_numeric(
+ transmission.in2.submodel._investment.size.solution.item(),
+ 10,
+ 'Sizing does not work properly',
)
diff --git a/tests/test_config.py b/tests/test_config.py
index c486d22c6..60ed80555 100644
--- a/tests/test_config.py
+++ b/tests/test_config.py
@@ -25,9 +25,9 @@ def teardown_method(self):
def test_config_defaults(self):
"""Test that CONFIG has correct default values."""
assert CONFIG.Logging.level == 'INFO'
- assert CONFIG.Logging.file == 'flixopt.log'
+ assert CONFIG.Logging.file is None
assert CONFIG.Logging.rich is False
- assert CONFIG.Logging.console is True
+ assert CONFIG.Logging.console is False
assert CONFIG.Modeling.big == 10_000_000
assert CONFIG.Modeling.epsilon == 1e-5
assert CONFIG.Modeling.big_binary_bound == 100_000
@@ -39,9 +39,9 @@ def test_module_initialization(self):
CONFIG.apply()
logger = logging.getLogger('flixopt')
# Should have at least one handler (file handler by default)
- assert len(logger.handlers) >= 1
+ assert len(logger.handlers) == 1
# Should have a file handler with default settings
- assert any(isinstance(h, logging.handlers.RotatingFileHandler) for h in logger.handlers)
+ assert isinstance(logger.handlers[0], logging.NullHandler)
def test_config_apply_console(self):
"""Test applying config with console logging enabled."""
@@ -102,7 +102,7 @@ def test_config_to_dict(self):
assert config_dict['config_name'] == 'flixopt'
assert config_dict['logging']['level'] == 'DEBUG'
assert config_dict['logging']['console'] is True
- assert config_dict['logging']['file'] == 'flixopt.log'
+ assert config_dict['logging']['file'] is None
assert config_dict['logging']['rich'] is False
assert 'modeling' in config_dict
assert config_dict['modeling']['big'] == 10_000_000
@@ -433,9 +433,9 @@ def test_config_reset(self):
# Verify all values are back to defaults
assert CONFIG.Logging.level == 'INFO'
- assert CONFIG.Logging.console is True
+ assert CONFIG.Logging.console is False
assert CONFIG.Logging.rich is False
- assert CONFIG.Logging.file == 'flixopt.log'
+ assert CONFIG.Logging.file is None
assert CONFIG.Modeling.big == 10_000_000
assert CONFIG.Modeling.epsilon == 1e-5
assert CONFIG.Modeling.big_binary_bound == 100_000
@@ -444,7 +444,7 @@ def test_config_reset(self):
# Verify logging was also reset
logger = logging.getLogger('flixopt')
assert logger.level == logging.INFO
- assert any(isinstance(h, logging.handlers.RotatingFileHandler) for h in logger.handlers)
+ assert isinstance(logger.handlers[0], logging.NullHandler)
def test_reset_matches_class_defaults(self):
"""Test that reset() values match the _DEFAULTS constants.
diff --git a/tests/test_cycle_detection.py b/tests/test_cycle_detection.py
new file mode 100644
index 000000000..753a9a3e5
--- /dev/null
+++ b/tests/test_cycle_detection.py
@@ -0,0 +1,200 @@
+import pytest
+
+from flixopt.effects import detect_cycles
+
+
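+# detect_cycles returns a list of cycles, each given as the node sequence with the
+# starting node repeated at the end (e.g. ['A', 'B', 'C', 'A']).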
+def test_empty_graph():
+ """Test that an empty graph has no cycles."""
+ assert detect_cycles({}) == []
+
+
+def test_single_node():
+ """Test that a graph with a single node and no edges has no cycles."""
+ assert detect_cycles({'A': []}) == []
+
+
+def test_self_loop():
+ """Test that a graph with a self-loop has a cycle."""
+ cycles = detect_cycles({'A': ['A']})
+ assert len(cycles) == 1
+ assert cycles[0] == ['A', 'A']
+
+
+def test_simple_cycle():
+ """Test that a simple cycle is detected."""
+ graph = {'A': ['B'], 'B': ['C'], 'C': ['A']}
+ cycles = detect_cycles(graph)
+ assert len(cycles) == 1
+ assert cycles[0] == ['A', 'B', 'C', 'A'] or cycles[0] == ['B', 'C', 'A', 'B'] or cycles[0] == ['C', 'A', 'B', 'C']
+
+
+def test_no_cycles():
+ """Test that a directed acyclic graph has no cycles."""
+ graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'D': [], 'E': [], 'F': []}
+ assert detect_cycles(graph) == []
+
+
+def test_multiple_cycles():
+ """Test that a graph with multiple cycles is detected."""
+ graph = {'A': ['B', 'D'], 'B': ['C'], 'C': ['A'], 'D': ['E'], 'E': ['D']}
+ cycles = detect_cycles(graph)
+ assert len(cycles) == 2
+
+ # Check that both cycles are detected (order might vary)
+ cycle_strings = [','.join(cycle) for cycle in cycles]
+ assert (
+ any('A,B,C,A' in s for s in cycle_strings)
+ or any('B,C,A,B' in s for s in cycle_strings)
+ or any('C,A,B,C' in s for s in cycle_strings)
+ )
+ assert any('D,E,D' in s for s in cycle_strings) or any('E,D,E' in s for s in cycle_strings)
+
+
+def test_hidden_cycle():
+ """Test that a cycle hidden deep in the graph is detected."""
+ graph = {
+ 'A': ['B', 'C'],
+ 'B': ['D'],
+ 'C': ['E'],
+ 'D': ['F'],
+ 'E': ['G'],
+ 'F': ['H'],
+ 'G': ['I'],
+ 'H': ['J'],
+ 'I': ['K'],
+ 'J': ['L'],
+ 'K': ['M'],
+ 'L': ['N'],
+ 'M': ['N'],
+ 'N': ['O'],
+ 'O': ['P'],
+ 'P': ['Q'],
+ 'Q': ['O'], # Hidden cycle O->P->Q->O
+ }
+ cycles = detect_cycles(graph)
+ assert len(cycles) == 1
+
+ # Check that the O-P-Q cycle is detected
+ cycle = cycles[0]
+ assert 'O' in cycle and 'P' in cycle and 'Q' in cycle
+
+ # Check that they appear in the correct order
+ o_index = cycle.index('O')
+ p_index = cycle.index('P')
+ q_index = cycle.index('Q')
+
+ # Check the cycle order is correct (allowing for different starting points)
+ cycle_len = len(cycle)
+ assert (
+ (p_index == (o_index + 1) % cycle_len and q_index == (p_index + 1) % cycle_len)
+ or (q_index == (o_index + 1) % cycle_len and p_index == (q_index + 1) % cycle_len)
+ or (o_index == (p_index + 1) % cycle_len and q_index == (o_index + 1) % cycle_len)
+ )
+
+
+def test_disconnected_graph():
+ """Test with a disconnected graph."""
+ graph = {'A': ['B'], 'B': ['C'], 'C': [], 'D': ['E'], 'E': ['F'], 'F': []}
+ assert detect_cycles(graph) == []
+
+
+def test_disconnected_graph_with_cycle():
+ """Test with a disconnected graph containing a cycle in one component."""
+ graph = {
+ 'A': ['B'],
+ 'B': ['C'],
+ 'C': [],
+ 'D': ['E'],
+ 'E': ['F'],
+ 'F': ['D'], # Cycle in D->E->F->D
+ }
+ cycles = detect_cycles(graph)
+ assert len(cycles) == 1
+
+ # Check that the D-E-F cycle is detected
+ cycle = cycles[0]
+ assert 'D' in cycle and 'E' in cycle and 'F' in cycle
+
+ # Check if they appear in the correct order
+ d_index = cycle.index('D')
+ e_index = cycle.index('E')
+ f_index = cycle.index('F')
+
+ # Check the cycle order is correct (allowing for different starting points)
+ cycle_len = len(cycle)
+ assert (
+ (e_index == (d_index + 1) % cycle_len and f_index == (e_index + 1) % cycle_len)
+ or (f_index == (d_index + 1) % cycle_len and e_index == (f_index + 1) % cycle_len)
+ or (d_index == (e_index + 1) % cycle_len and f_index == (d_index + 1) % cycle_len)
+ )
+
+
+def test_complex_dag():
+ """Test with a complex directed acyclic graph."""
+ graph = {
+ 'A': ['B', 'C', 'D'],
+ 'B': ['E', 'F'],
+ 'C': ['E', 'G'],
+ 'D': ['G', 'H'],
+ 'E': ['I', 'J'],
+ 'F': ['J', 'K'],
+ 'G': ['K', 'L'],
+ 'H': ['L', 'M'],
+ 'I': ['N'],
+ 'J': ['N', 'O'],
+ 'K': ['O', 'P'],
+ 'L': ['P', 'Q'],
+ 'M': ['Q'],
+ 'N': ['R'],
+ 'O': ['R', 'S'],
+ 'P': ['S'],
+ 'Q': ['S'],
+ 'R': [],
+ 'S': [],
+ }
+ assert detect_cycles(graph) == []
+
+
+def test_missing_node_in_connections():
+ """Test behavior when a node referenced in edges doesn't have its own key."""
+ graph = {
+ 'A': ['B', 'C'],
+ 'B': ['D'],
+ # C and D don't have their own entries
+ }
+ assert detect_cycles(graph) == []
+
+
+def test_non_string_keys():
+ """Test with non-string keys to ensure the algorithm is generic."""
+ graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
+ assert detect_cycles(graph) == []
+
+ graph_with_cycle = {1: [2], 2: [3], 3: [1]}
+ cycles = detect_cycles(graph_with_cycle)
+ assert len(cycles) == 1
+ assert cycles[0] == [1, 2, 3, 1] or cycles[0] == [2, 3, 1, 2] or cycles[0] == [3, 1, 2, 3]
+
+
+def test_complex_network_with_many_nodes():
+ """Test with a large network to check performance and correctness."""
+ graph = {}
+ # Create a large DAG
+ for i in range(100):
+ # Connect each node to the next few nodes
+ graph[i] = [j for j in range(i + 1, min(i + 5, 100))]
+
+ # No cycles in this arrangement
+ assert detect_cycles(graph) == []
+
+ # Add a single back edge to create a cycle
+ graph[99] = [0] # This creates a cycle
+ cycles = detect_cycles(graph)
+ assert len(cycles) >= 1
+ # The cycle might include many nodes, but must contain both 0 and 99
+ any_cycle_has_both = any(0 in cycle and 99 in cycle for cycle in cycles)
+ assert any_cycle_has_both
+
+
+if __name__ == '__main__':
+ pytest.main(['-v'])
diff --git a/tests/test_dataconverter.py b/tests/test_dataconverter.py
index 49f1438e7..0f12a1af3 100644
--- a/tests/test_dataconverter.py
+++ b/tests/test_dataconverter.py
@@ -3,110 +3,1259 @@
import pytest
import xarray as xr
-from flixopt.core import ConversionError, DataConverter # Adjust this import to match your project structure
+from flixopt.core import (
+ ConversionError,
+ DataConverter,
+ TimeSeriesData,
+)
@pytest.fixture
-def sample_time_index(request):
- index = pd.date_range('2024-01-01', periods=5, freq='D', name='time')
- return index
-
-
-def test_scalar_conversion(sample_time_index):
- # Test scalar conversion
- result = DataConverter.as_dataarray(42, sample_time_index)
- assert isinstance(result, xr.DataArray)
- assert result.shape == (len(sample_time_index),)
- assert result.dims == ('time',)
- assert np.all(result.values == 42)
-
-
-def test_series_conversion(sample_time_index):
- series = pd.Series([1, 2, 3, 4, 5], index=sample_time_index)
-
- # Test Series conversion
- result = DataConverter.as_dataarray(series, sample_time_index)
- assert isinstance(result, xr.DataArray)
- assert result.shape == (5,)
- assert result.dims == ('time',)
- assert np.array_equal(result.values, series.values)
-
-
-def test_dataframe_conversion(sample_time_index):
- # Create a single-column DataFrame
- df = pd.DataFrame({'A': [1, 2, 3, 4, 5]}, index=sample_time_index)
-
- # Test DataFrame conversion
- result = DataConverter.as_dataarray(df, sample_time_index)
- assert isinstance(result, xr.DataArray)
- assert result.shape == (5,)
- assert result.dims == ('time',)
- assert np.array_equal(result.values.flatten(), df['A'].values)
-
-
-def test_ndarray_conversion(sample_time_index):
- # Test 1D array conversion
- arr_1d = np.array([1, 2, 3, 4, 5])
- result = DataConverter.as_dataarray(arr_1d, sample_time_index)
- assert result.shape == (5,)
- assert result.dims == ('time',)
- assert np.array_equal(result.values, arr_1d)
-
-
-def test_dataarray_conversion(sample_time_index):
- # Create a DataArray
- original = xr.DataArray(data=np.array([1, 2, 3, 4, 5]), coords={'time': sample_time_index}, dims=['time'])
-
- # Test DataArray conversion
- result = DataConverter.as_dataarray(original, sample_time_index)
- assert result.shape == (5,)
- assert result.dims == ('time',)
- assert np.array_equal(result.values, original.values)
-
- # Ensure it's a copy
- result[0] = 999
- assert original[0].item() == 1 # Original should be unchanged
-
-
-def test_invalid_inputs(sample_time_index):
- # Test invalid input type
- with pytest.raises(ConversionError):
- DataConverter.as_dataarray('invalid_string', sample_time_index)
-
- # Test mismatched Series index
- mismatched_series = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range('2025-01-01', periods=6, freq='D'))
- with pytest.raises(ConversionError):
- DataConverter.as_dataarray(mismatched_series, sample_time_index)
-
- # Test DataFrame with multiple columns
- df_multi_col = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': [6, 7, 8, 9, 10]}, index=sample_time_index)
- with pytest.raises(ConversionError):
- DataConverter.as_dataarray(df_multi_col, sample_time_index)
-
- # Test mismatched array shape
- with pytest.raises(ConversionError):
- DataConverter.as_dataarray(np.array([1, 2, 3]), sample_time_index) # Wrong length
-
- # Test multi-dimensional array
- with pytest.raises(ConversionError):
- DataConverter.as_dataarray(np.array([[1, 2], [3, 4]]), sample_time_index) # 2D array not allowed
-
-
-def test_time_index_validation():
- # Test with unnamed index
- unnamed_index = pd.date_range('2024-01-01', periods=5, freq='D')
- with pytest.raises(ConversionError):
- DataConverter.as_dataarray(42, unnamed_index)
-
- # Test with empty index
- empty_index = pd.DatetimeIndex([], name='time')
- with pytest.raises(ValueError):
- DataConverter.as_dataarray(42, empty_index)
-
- # Test with non-DatetimeIndex
- wrong_type_index = pd.Index([1, 2, 3, 4, 5], name='time')
- with pytest.raises(ValueError):
- DataConverter.as_dataarray(42, wrong_type_index)
+def time_coords():
+ return pd.date_range('2024-01-01', periods=5, freq='D', name='time')
+
+
+@pytest.fixture
+def scenario_coords():
+ return pd.Index(['baseline', 'high', 'low'], name='scenario')
+
+
+@pytest.fixture
+def region_coords():
+ return pd.Index(['north', 'south', 'east'], name='region')
+
+
+@pytest.fixture
+def standard_coords():
+ """Standard coordinates with unique lengths for easy testing."""
+ return {
+ 'time': pd.date_range('2024-01-01', periods=5, freq='D', name='time'), # length 5
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['north', 'south'], name='region'), # length 2
+ }
+
+
+class TestScalarConversion:
+ """Test scalar data conversions with different coordinate configurations."""
+
+ def test_scalar_no_coords(self):
+ """Scalar without coordinates should create 0D DataArray."""
+ result = DataConverter.to_dataarray(42)
+ assert result.shape == ()
+ assert result.dims == ()
+ assert result.item() == 42
+
+ def test_scalar_single_coord(self, time_coords):
+ """Scalar with single coordinate should broadcast."""
+ result = DataConverter.to_dataarray(42, coords={'time': time_coords})
+ assert result.shape == (5,)
+ assert result.dims == ('time',)
+ assert np.all(result.values == 42)
+
+ def test_scalar_multiple_coords(self, time_coords, scenario_coords):
+ """Scalar with multiple coordinates should broadcast to all."""
+ result = DataConverter.to_dataarray(42, coords={'time': time_coords, 'scenario': scenario_coords})
+ assert result.shape == (5, 3)
+ assert result.dims == ('time', 'scenario')
+ assert np.all(result.values == 42)
+
+ def test_numpy_scalars(self, time_coords):
+ """Test numpy scalar types."""
+ for scalar in [np.int32(42), np.int64(42), np.float32(42.5), np.float64(42.5)]:
+ result = DataConverter.to_dataarray(scalar, coords={'time': time_coords})
+ assert result.shape == (5,)
+ assert np.all(result.values == scalar.item())
+
+ def test_scalar_many_dimensions(self, standard_coords):
+ """Scalar should broadcast to any number of dimensions."""
+ coords = {**standard_coords, 'technology': pd.Index(['solar', 'wind'], name='technology')}
+
+ result = DataConverter.to_dataarray(42, coords=coords)
+ assert result.shape == (5, 3, 2, 2)
+ assert result.dims == ('time', 'scenario', 'region', 'technology')
+ assert np.all(result.values == 42)
+
+
+class TestOneDimensionalArrayConversion:
+ """Test 1D numpy array and pandas Series conversions."""
+
+ def test_1d_array_no_coords(self):
+ """1D array without coords should fail unless single element."""
+ # Multi-element fails
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(np.array([1, 2, 3]))
+
+ # Single element succeeds
+ result = DataConverter.to_dataarray(np.array([42]))
+ assert result.shape == ()
+ assert result.item() == 42
+
+ def test_1d_array_matching_coord(self, time_coords):
+ """1D array matching coordinate length should work."""
+ arr = np.array([10, 20, 30, 40, 50])
+ result = DataConverter.to_dataarray(arr, coords={'time': time_coords})
+ assert result.shape == (5,)
+ assert result.dims == ('time',)
+ assert np.array_equal(result.values, arr)
+
+ def test_1d_array_mismatched_coord(self, time_coords):
+ """1D array not matching coordinate length should fail."""
+ arr = np.array([10, 20, 30]) # Length 3, time_coords has length 5
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(arr, coords={'time': time_coords})
+
+ def test_1d_array_broadcast_to_multiple_coords(self, time_coords, scenario_coords):
+ """1D array should broadcast to matching dimension."""
+ # Array matching time dimension
+ time_arr = np.array([10, 20, 30, 40, 50])
+ result = DataConverter.to_dataarray(time_arr, coords={'time': time_coords, 'scenario': scenario_coords})
+ assert result.shape == (5, 3)
+ assert result.dims == ('time', 'scenario')
+
+ # Each scenario should have the same time values
+ for scenario in scenario_coords:
+ assert np.array_equal(result.sel(scenario=scenario).values, time_arr)
+
+ # Array matching scenario dimension
+ scenario_arr = np.array([100, 200, 300])
+ result = DataConverter.to_dataarray(scenario_arr, coords={'time': time_coords, 'scenario': scenario_coords})
+ assert result.shape == (5, 3)
+ assert result.dims == ('time', 'scenario')
+
+ # Each time should have the same scenario values
+ for time in time_coords:
+ assert np.array_equal(result.sel(time=time).values, scenario_arr)
+
+ def test_1d_array_ambiguous_length(self):
+ """Array length matching multiple dimensions should fail."""
+ # Both dimensions have length 3
+ coords_3x3 = {
+ 'time': pd.date_range('2024-01-01', periods=3, freq='D', name='time'),
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'),
+ }
+ arr = np.array([1, 2, 3])
+
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr, coords=coords_3x3)
+
+ def test_1d_array_broadcast_to_many_dimensions(self, standard_coords):
+ """1D array should broadcast to many dimensions."""
+ # Array matching time dimension
+ time_arr = np.array([10, 20, 30, 40, 50])
+ result = DataConverter.to_dataarray(time_arr, coords=standard_coords)
+
+ assert result.shape == (5, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+
+ # Check broadcasting - all scenarios and regions should have same time values
+ for scenario in standard_coords['scenario']:
+ for region in standard_coords['region']:
+ assert np.array_equal(result.sel(scenario=scenario, region=region).values, time_arr)
+
+
+class TestSeriesConversion:
+ """Test pandas Series conversions."""
+
+ def test_series_no_coords(self):
+ """Series without coords should fail unless single element."""
+ # Multi-element fails
+ series = pd.Series([1, 2, 3])
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(series)
+
+ # Single element succeeds
+ single_series = pd.Series([42])
+ result = DataConverter.to_dataarray(single_series)
+ assert result.shape == ()
+ assert result.item() == 42
+
+ def test_series_matching_index(self, time_coords, scenario_coords):
+ """Series with matching index should work."""
+ # Time-indexed series
+ time_series = pd.Series([10, 20, 30, 40, 50], index=time_coords)
+ result = DataConverter.to_dataarray(time_series, coords={'time': time_coords})
+ assert result.shape == (5,)
+ assert result.dims == ('time',)
+ assert np.array_equal(result.values, time_series.values)
+
+ # Scenario-indexed series
+ scenario_series = pd.Series([100, 200, 300], index=scenario_coords)
+ result = DataConverter.to_dataarray(scenario_series, coords={'scenario': scenario_coords})
+ assert result.shape == (3,)
+ assert result.dims == ('scenario',)
+ assert np.array_equal(result.values, scenario_series.values)
+
+ def test_series_mismatched_index(self, time_coords):
+ """Series with non-matching index should fail."""
+ wrong_times = pd.date_range('2025-01-01', periods=5, freq='D', name='time')
+ series = pd.Series([10, 20, 30, 40, 50], index=wrong_times)
+
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(series, coords={'time': time_coords})
+
+ def test_series_broadcast_to_multiple_coords(self, time_coords, scenario_coords):
+ """Series should broadcast to non-matching dimensions."""
+ # Time series broadcast to scenarios
+ time_series = pd.Series([10, 20, 30, 40, 50], index=time_coords)
+ result = DataConverter.to_dataarray(time_series, coords={'time': time_coords, 'scenario': scenario_coords})
+ assert result.shape == (5, 3)
+
+ for scenario in scenario_coords:
+ assert np.array_equal(result.sel(scenario=scenario).values, time_series.values)
+
+ def test_series_wrong_dimension(self, time_coords, region_coords):
+ """Series indexed by dimension not in coords should fail."""
+ wrong_series = pd.Series([1, 2, 3], index=region_coords)
+
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(wrong_series, coords={'time': time_coords})
+
+ def test_series_broadcast_to_many_dimensions(self, standard_coords):
+ """Series should broadcast to many dimensions."""
+ time_series = pd.Series([100, 200, 300, 400, 500], index=standard_coords['time'])
+ result = DataConverter.to_dataarray(time_series, coords=standard_coords)
+
+ assert result.shape == (5, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+
+ # Check that all non-time dimensions have the same time series values
+ for scenario in standard_coords['scenario']:
+ for region in standard_coords['region']:
+ assert np.array_equal(result.sel(scenario=scenario, region=region).values, time_series.values)
+
+
+class TestDataFrameConversion:
+ """Test pandas DataFrame conversions."""
+
+ def test_single_column_dataframe(self, time_coords):
+ """Single-column DataFrame should work like Series."""
+ df = pd.DataFrame({'value': [10, 20, 30, 40, 50]}, index=time_coords)
+ result = DataConverter.to_dataarray(df, coords={'time': time_coords})
+
+ assert result.shape == (5,)
+ assert result.dims == ('time',)
+ assert np.array_equal(result.values, df['value'].values)
+
+ def test_multi_column_dataframe_accepted(self, time_coords, scenario_coords):
+ """Multi-column DataFrame should now be accepted and converted via numpy array path."""
+ df = pd.DataFrame(
+ {'value1': [10, 20, 30, 40, 50], 'value2': [15, 25, 35, 45, 55], 'value3': [12, 22, 32, 42, 52]},
+ index=time_coords,
+ )
+
+ # Should work by converting to numpy array (5x3) and matching to time x scenario
+ result = DataConverter.to_dataarray(df, coords={'time': time_coords, 'scenario': scenario_coords})
+
+ assert result.shape == (5, 3)
+ assert result.dims == ('time', 'scenario')
+ assert np.array_equal(result.values, df.to_numpy())
+
+ def test_empty_dataframe_rejected(self, time_coords):
+ """Empty DataFrame should be rejected."""
+ df = pd.DataFrame(index=time_coords) # No columns
+
+ with pytest.raises(ConversionError, match='DataFrame must have at least one column'):
+ DataConverter.to_dataarray(df, coords={'time': time_coords})
+
+ def test_dataframe_broadcast(self, time_coords, scenario_coords):
+ """Single-column DataFrame should broadcast like Series."""
+ df = pd.DataFrame({'power': [10, 20, 30, 40, 50]}, index=time_coords)
+ result = DataConverter.to_dataarray(df, coords={'time': time_coords, 'scenario': scenario_coords})
+
+ assert result.shape == (5, 3)
+ for scenario in scenario_coords:
+ assert np.array_equal(result.sel(scenario=scenario).values, df['power'].values)
+
+
+class TestMultiDimensionalArrayConversion:
+ """Test multi-dimensional numpy array conversions."""
+
+ def test_2d_array_unique_dimensions(self, standard_coords):
+ """2D array with unique dimension lengths should work."""
+ # 5x3 array should map to time x scenario
+ data_2d = np.random.rand(5, 3)
+ result = DataConverter.to_dataarray(
+ data_2d, coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']}
+ )
+
+ assert result.shape == (5, 3)
+ assert result.dims == ('time', 'scenario')
+ assert np.array_equal(result.values, data_2d)
+
+ # 3x5 array should map to scenario x time
+ data_2d_flipped = np.random.rand(3, 5)
+ result_flipped = DataConverter.to_dataarray(
+ data_2d_flipped, coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']}
+ )
+
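+ # Result dims follow the target coords order, so compare against the transposed input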
+ assert result_flipped.shape == (5, 3)
+ assert result_flipped.dims == ('time', 'scenario')
+ assert np.array_equal(result_flipped.values.transpose(), data_2d_flipped)
+
+ def test_2d_array_broadcast_to_3d(self, standard_coords):
+ """2D array should broadcast to additional dimensions when using partial matching."""
+ # The 2D array (5x3) matches time × scenario and broadcasts across region
+ data_2d = np.random.rand(5, 3)
+ result = DataConverter.to_dataarray(data_2d, coords=standard_coords)
+
+ assert result.shape == (5, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+
+ # Check that all regions have the same time x scenario data
+ for region in standard_coords['region']:
+ assert np.array_equal(result.sel(region=region).values, data_2d)
+
+ def test_3d_array_unique_dimensions(self, standard_coords):
+ """3D array with unique dimension lengths should work."""
+ # 5x3x2 array should map to time x scenario x region
+ data_3d = np.random.rand(5, 3, 2)
+ result = DataConverter.to_dataarray(data_3d, coords=standard_coords)
+
+ assert result.shape == (5, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+ assert np.array_equal(result.values, data_3d)
+
+ def test_3d_array_different_permutation(self, standard_coords):
+ """3D array with different dimension order should work."""
+ # 2x5x3 array should map to region x time x scenario
+ data_3d = np.random.rand(2, 5, 3)
+ result = DataConverter.to_dataarray(data_3d, coords=standard_coords)
+
+ assert result.shape == (5, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+ assert np.array_equal(result.transpose('region', 'time', 'scenario').values, data_3d)
+
+ def test_4d_array_unique_dimensions(self):
+ """4D array with unique dimension lengths should work."""
+ coords = {
+ 'time': pd.date_range('2024-01-01', periods=2, freq='D', name='time'), # length 2
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['north', 'south', 'east', 'west'], name='region'), # length 4
+ 'technology': pd.Index(['solar', 'wind', 'gas', 'coal', 'hydro'], name='technology'), # length 5
+ }
+
+ # 3x5x2x4 array should map to scenario x technology x time x region
+ data_4d = np.random.rand(3, 5, 2, 4)
+ result = DataConverter.to_dataarray(data_4d, coords=coords)
+
+ assert result.shape == (2, 3, 4, 5)
+ assert result.dims == ('time', 'scenario', 'region', 'technology')
+ assert np.array_equal(result.transpose('scenario', 'technology', 'time', 'region').values, data_4d)
+
+ def test_2d_array_ambiguous_dimensions_error(self):
+ """2D array with ambiguous dimension lengths should fail."""
+ # Both dimensions have length 3
+ coords_ambiguous = {
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['north', 'south', 'east'], name='region'), # length 3
+ }
+
+ data_2d = np.random.rand(3, 3)
+ with pytest.raises(ConversionError, match='matches multiple dimension combinations'):
+ DataConverter.to_dataarray(data_2d, coords=coords_ambiguous)
+
+ def test_multid_array_no_coords(self):
+ """Multi-D arrays without coords should fail unless scalar."""
+ # Multi-element fails
+ data_2d = np.random.rand(2, 3)
+ with pytest.raises(ConversionError, match='Cannot convert multi-element array without target dimensions'):
+ DataConverter.to_dataarray(data_2d)
+
+ # Single element succeeds
+ single_element = np.array([[42]])
+ result = DataConverter.to_dataarray(single_element)
+ assert result.shape == ()
+ assert result.item() == 42
+
+ def test_array_no_matching_dimensions_error(self, standard_coords):
+ """Array with no matching dimension lengths should fail."""
+ # 7x8 array - no dimension has length 7 or 8
+ data_2d = np.random.rand(7, 8)
+ coords_2d = {
+ 'time': standard_coords['time'], # length 5
+ 'scenario': standard_coords['scenario'], # length 3
+ }
+
+ with pytest.raises(ConversionError, match='cannot be mapped to any combination'):
+ DataConverter.to_dataarray(data_2d, coords=coords_2d)
+
+ def test_multid_array_special_values(self, standard_coords):
+ """Multi-D arrays should preserve special values."""
+ # Create 2D array with special values
+ data_2d = np.array(
+ [[1.0, np.nan, 3.0], [np.inf, 5.0, -np.inf], [7.0, 8.0, 9.0], [10.0, np.nan, 12.0], [13.0, 14.0, np.inf]]
+ )
+
+ result = DataConverter.to_dataarray(
+ data_2d, coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']}
+ )
+
+ assert result.shape == (5, 3)
+ assert np.array_equal(np.isnan(result.values), np.isnan(data_2d))
+ assert np.array_equal(np.isinf(result.values), np.isinf(data_2d))
+
+ def test_multid_array_dtype_preservation(self, standard_coords):
+ """Multi-D arrays should preserve data types."""
+ # Integer array
+ int_data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15]], dtype=np.int32)
+
+ result_int = DataConverter.to_dataarray(
+ int_data, coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']}
+ )
+
+ assert result_int.dtype == np.int32
+ assert np.array_equal(result_int.values, int_data)
+
+ # Boolean array
+ bool_data = np.array(
+ [[True, False, True], [False, True, False], [True, True, False], [False, False, True], [True, False, True]]
+ )
+
+ result_bool = DataConverter.to_dataarray(
+ bool_data, coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']}
+ )
+
+ assert result_bool.dtype == bool
+ assert np.array_equal(result_bool.values, bool_data)
+
+
+class TestDataArrayConversion:
+ """Test xarray DataArray conversions."""
+
+ def test_compatible_dataarray(self, time_coords):
+ """Compatible DataArray should pass through."""
+ original = xr.DataArray([10, 20, 30, 40, 50], coords={'time': time_coords}, dims='time')
+ result = DataConverter.to_dataarray(original, coords={'time': time_coords})
+
+ assert result.shape == (5,)
+ assert result.dims == ('time',)
+ assert np.array_equal(result.values, original.values)
+
+ # Should be a copy
+ result[0] = 999
+ assert original[0].item() == 10
+
+ def test_incompatible_dataarray_coords(self, time_coords):
+ """DataArray with wrong coordinates should fail."""
+ wrong_times = pd.date_range('2025-01-01', periods=5, freq='D', name='time')
+ original = xr.DataArray([10, 20, 30, 40, 50], coords={'time': wrong_times}, dims='time')
+
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(original, coords={'time': time_coords})
+
+ def test_incompatible_dataarray_dims(self, time_coords):
+ """DataArray with wrong dimensions should fail."""
+ original = xr.DataArray([10, 20, 30, 40, 50], coords={'wrong_dim': range(5)}, dims='wrong_dim')
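+ # 'wrong_dim' cannot be mapped onto the requested 'time' dimension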
+
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(original, coords={'time': time_coords})
+
+ def test_dataarray_broadcast(self, time_coords, scenario_coords):
+ """DataArray should broadcast to additional dimensions."""
+ # 1D time DataArray to 2D time+scenario
+ original = xr.DataArray([10, 20, 30, 40, 50], coords={'time': time_coords}, dims='time')
+ result = DataConverter.to_dataarray(original, coords={'time': time_coords, 'scenario': scenario_coords})
+
+ assert result.shape == (5, 3)
+ assert result.dims == ('time', 'scenario')
+
+ for scenario in scenario_coords:
+ assert np.array_equal(result.sel(scenario=scenario).values, original.values)
+
+ def test_scalar_dataarray_broadcast(self, time_coords, scenario_coords):
+ """Scalar DataArray should broadcast to all dimensions."""
+ scalar_da = xr.DataArray(42)
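+ # A 0-d DataArray has no dims, so it broadcasts like a plain scalar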
+ result = DataConverter.to_dataarray(scalar_da, coords={'time': time_coords, 'scenario': scenario_coords})
+
+ assert result.shape == (5, 3)
+ assert np.all(result.values == 42)
+
+ def test_2d_dataarray_broadcast_to_more_dimensions(self, standard_coords):
+ """DataArray should broadcast to additional dimensions."""
+ # Start with 2D DataArray
+ original = xr.DataArray(
+ [[10, 20, 30], [40, 50, 60], [70, 80, 90], [100, 110, 120], [130, 140, 150]],
+ coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']},
+ dims=('time', 'scenario'),
+ )
+
+ # Broadcast to 3D
+ result = DataConverter.to_dataarray(original, coords=standard_coords)
+
+ assert result.shape == (5, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+
+ # Check that all regions have the same time+scenario values
+ for region in standard_coords['region']:
+ assert np.array_equal(result.sel(region=region).values, original.values)
+
+
+class TestTimeSeriesDataConversion:
+ """Test TimeSeriesData conversions."""
+
+ def test_timeseries_data_basic(self, time_coords):
+ """TimeSeriesData should work like DataArray."""
+ data_array = xr.DataArray([10, 20, 30, 40, 50], coords={'time': time_coords}, dims='time')
+ ts_data = TimeSeriesData(data_array, aggregation_group='test')
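+ # aggregation_group is metadata and should not affect the converted values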
+
+ result = DataConverter.to_dataarray(ts_data, coords={'time': time_coords})
+
+ assert result.shape == (5,)
+ assert result.dims == ('time',)
+ assert np.array_equal(result.values, [10, 20, 30, 40, 50])
+
+ def test_timeseries_data_broadcast(self, time_coords, scenario_coords):
+ """TimeSeriesData should broadcast to additional dimensions."""
+ data_array = xr.DataArray([10, 20, 30, 40, 50], coords={'time': time_coords}, dims='time')
+ ts_data = TimeSeriesData(data_array)
+
+ result = DataConverter.to_dataarray(ts_data, coords={'time': time_coords, 'scenario': scenario_coords})
+
+ assert result.shape == (5, 3)
+ for scenario in scenario_coords:
+ assert np.array_equal(result.sel(scenario=scenario).values, [10, 20, 30, 40, 50])
+
+
+class TestRepeatedConversion:
+ """Test that repeated to_dataarray calls return identical results."""
+
+ def test_to_dataarray_is_deterministic(self, time_coords, scenario_coords):
+ """Converting the same input twice should produce identical DataArrays."""
+ # Test with scalar
+ result_first = DataConverter.to_dataarray(42, coords={'time': time_coords})
+ result_second = DataConverter.to_dataarray(42, coords={'time': time_coords})
+ assert np.array_equal(result_first.values, result_second.values)
+ assert result_first.dims == result_second.dims
+ assert result_first.shape == result_second.shape
+
+ # Test with array
+ arr = np.array([10, 20, 30, 40, 50])
+ result_first_arr = DataConverter.to_dataarray(arr, coords={'time': time_coords})
+ result_second_arr = DataConverter.to_dataarray(arr, coords={'time': time_coords})
+ assert np.array_equal(result_first_arr.values, result_second_arr.values)
+ assert result_first_arr.dims == result_second_arr.dims
+
+ # Test with Series
+ series = pd.Series([100, 200, 300, 400, 500], index=time_coords)
+ result_first_series = DataConverter.to_dataarray(series, coords={'time': time_coords, 'scenario': scenario_coords})
+ result_second_series = DataConverter.to_dataarray(series, coords={'time': time_coords, 'scenario': scenario_coords})
+ assert np.array_equal(result_first_series.values, result_second_series.values)
+ assert result_first_series.dims == result_second_series.dims
+
+
+class TestCustomDimensions:
+ """Test with custom dimension names beyond time/scenario."""
+
+ def test_custom_single_dimension(self, region_coords):
+ """Test with custom dimension name."""
+ result = DataConverter.to_dataarray(42, coords={'region': region_coords})
+ assert result.shape == (3,)
+ assert result.dims == ('region',)
+ assert np.all(result.values == 42)
+
+ def test_custom_multiple_dimensions(self):
+ """Test with multiple custom dimensions."""
+ products = pd.Index(['A', 'B'], name='product')
+ technologies = pd.Index(['solar', 'wind', 'gas'], name='technology')
+
+ # Array matching technology dimension
+ arr = np.array([100, 150, 80])
+ result = DataConverter.to_dataarray(arr, coords={'product': products, 'technology': technologies})
+
+ assert result.shape == (2, 3)
+ assert result.dims == ('product', 'technology')
+
+ # Should broadcast across products
+ for product in products:
+ assert np.array_equal(result.sel(product=product).values, arr)
+
+ def test_mixed_dimension_types(self):
+ """Test mixing time dimension with custom dimensions."""
+ time_coords = pd.date_range('2024-01-01', periods=3, freq='D', name='time')
+ regions = pd.Index(['north', 'south'], name='region')
+
+ # Time series should broadcast to regions
+ time_series = pd.Series([10, 20, 30], index=time_coords)
+ result = DataConverter.to_dataarray(time_series, coords={'time': time_coords, 'region': regions})
+
+ assert result.shape == (3, 2)
+ assert result.dims == ('time', 'region')
+
+ def test_custom_dimensions_complex(self):
+ """Test complex scenario with custom dimensions."""
+ coords = {
+ 'product': pd.Index(['A', 'B'], name='product'),
+ 'factory': pd.Index(['F1', 'F2', 'F3'], name='factory'),
+ 'quarter': pd.Index(['Q1', 'Q2', 'Q3', 'Q4'], name='quarter'),
+ }
+
+ # Array matching factory dimension
+ factory_arr = np.array([100, 200, 300])
+ result = DataConverter.to_dataarray(factory_arr, coords=coords)
+
+ assert result.shape == (2, 3, 4)
+ assert result.dims == ('product', 'factory', 'quarter')
+
+ # Check broadcasting
+ for product in coords['product']:
+ for quarter in coords['quarter']:
+ slice_data = result.sel(product=product, quarter=quarter)
+ assert np.array_equal(slice_data.values, factory_arr)
+
+
+class TestValidation:
+ """Test coordinate validation."""
+
+ def test_empty_coords(self):
+ """Empty coordinates should work for scalars."""
+ result = DataConverter.to_dataarray(42, coords={})
+ assert result.shape == ()
+ assert result.item() == 42
+
+ def test_invalid_coord_type(self):
+ """Non-pandas Index coordinates should fail."""
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(42, coords={'time': [1, 2, 3]})
+
+ def test_empty_coord_index(self):
+ """Empty coordinate index should fail."""
+ empty_index = pd.Index([], name='time')
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(42, coords={'time': empty_index})
+
+ def test_time_coord_validation(self):
+ """Time coordinates must be DatetimeIndex."""
+ # Non-datetime index with name 'time' should fail
+ wrong_time = pd.Index([1, 2, 3], name='time')
+ with pytest.raises(ConversionError, match='DatetimeIndex'):
+ DataConverter.to_dataarray(42, coords={'time': wrong_time})
+
+ def test_coord_naming(self, time_coords):
+ """Coordinates should be auto-renamed to match dimension."""
+ # Unnamed time index should be renamed
+ unnamed_time = time_coords.rename(None)
+ result = DataConverter.to_dataarray(42, coords={'time': unnamed_time})
+ assert result.coords['time'].name == 'time'
+
+
+class TestErrorHandling:
+ """Test error handling and edge cases."""
+
+ def test_unsupported_data_types(self, time_coords):
+ """Unsupported data types should fail with clear messages."""
+ unsupported = ['string', object(), None, {'dict': 'value'}, [1, 2, 3]]
+
+ for data in unsupported:
+ with pytest.raises(ConversionError):
+ DataConverter.to_dataarray(data, coords={'time': time_coords})
+
+ def test_dimension_mismatch_messages(self, time_coords, scenario_coords):
+ """Error messages should be informative."""
+ # Array with wrong length
+ wrong_arr = np.array([1, 2]) # Length 2, but no dimension has length 2
+ with pytest.raises(ConversionError, match='does not match any target dimension lengths'):
+ DataConverter.to_dataarray(wrong_arr, coords={'time': time_coords, 'scenario': scenario_coords})
+
+ def test_multidimensional_array_dimension_count_mismatch(self, standard_coords):
+ """Array with wrong number of dimensions should fail with clear error."""
+ # 4D array with 3D coordinates
+ data_4d = np.random.rand(5, 3, 2, 4)
+ with pytest.raises(ConversionError, match='cannot be mapped to any combination'):
+ DataConverter.to_dataarray(data_4d, coords=standard_coords)
+
+ def test_error_message_quality(self, standard_coords):
+ """Error messages should include helpful information."""
+ # Wrong shape array
+ data_2d = np.random.rand(7, 8)
+ coords_2d = {
+ 'time': standard_coords['time'], # length 5
+ 'scenario': standard_coords['scenario'], # length 3
+ }
+
+ try:
+ DataConverter.to_dataarray(data_2d, coords=coords_2d)
+ raise AssertionError('Should have raised ConversionError')
+ except ConversionError as e:
+ error_msg = str(e)
+ assert 'Array shape (7, 8)' in error_msg
+ assert 'target coordinate lengths:' in error_msg
+
+
+class TestDataIntegrity:
+ """Test data copying and integrity."""
+
+ def test_array_copy_independence(self, time_coords):
+ """Converted arrays should be independent copies."""
+ original_arr = np.array([10, 20, 30, 40, 50])
+ result = DataConverter.to_dataarray(original_arr, coords={'time': time_coords})
+
+ # Modify result
+ result[0] = 999
+
+ # Original should be unchanged
+ assert original_arr[0] == 10
+
+ def test_series_copy_independence(self, time_coords):
+ """Converted Series should be independent copies."""
+ original_series = pd.Series([10, 20, 30, 40, 50], index=time_coords)
+ result = DataConverter.to_dataarray(original_series, coords={'time': time_coords})
+
+ # Modify result
+ result[0] = 999
+
+ # Original should be unchanged
+ assert original_series.iloc[0] == 10
+
+ def test_dataframe_copy_independence(self, time_coords):
+ """Converted DataFrames should be independent copies."""
+ original_df = pd.DataFrame({'value': [10, 20, 30, 40, 50]}, index=time_coords)
+ result = DataConverter.to_dataarray(original_df, coords={'time': time_coords})
+
+ # Modify result
+ result[0] = 999
+
+ # Original should be unchanged
+ assert original_df.loc[time_coords[0], 'value'] == 10
+
+ def test_multid_array_copy_independence(self, standard_coords):
+ """Multi-D arrays should be independent copies."""
+ original_data = np.random.rand(5, 3)
+ result = DataConverter.to_dataarray(
+ original_data, coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']}
+ )
+
+ # Modify result
+ result[0, 0] = 999
+
+ # Original should be unchanged
+ assert original_data[0, 0] != 999
+
+
+class TestBooleanValues:
+ """Test handling of boolean values and arrays."""
+
+ def test_scalar_boolean_to_dataarray(self, time_coords):
+ """Scalar boolean values should work with to_dataarray."""
+ result_true = DataConverter.to_dataarray(True, coords={'time': time_coords})
+ assert result_true.shape == (5,)
+ assert result_true.dtype == bool
+ assert np.all(result_true.values)
+
+ result_false = DataConverter.to_dataarray(False, coords={'time': time_coords})
+ assert result_false.shape == (5,)
+ assert result_false.dtype == bool
+ assert not np.any(result_false.values)
+
+ def test_numpy_boolean_scalar(self, time_coords):
+ """Numpy boolean scalars should work."""
+ result_np_true = DataConverter.to_dataarray(np.bool_(True), coords={'time': time_coords})
+ assert result_np_true.shape == (5,)
+ assert result_np_true.dtype == bool
+ assert np.all(result_np_true.values)
+
+ result_np_false = DataConverter.to_dataarray(np.bool_(False), coords={'time': time_coords})
+ assert result_np_false.shape == (5,)
+ assert result_np_false.dtype == bool
+ assert not np.any(result_np_false.values)
+
+ def test_boolean_array_to_dataarray(self, time_coords):
+ """Boolean arrays should work with to_dataarray."""
+ bool_arr = np.array([True, False, True, False, True])
+ result = DataConverter.to_dataarray(bool_arr, coords={'time': time_coords})
+ assert result.shape == (5,)
+ assert result.dims == ('time',)
+ assert result.dtype == bool
+ assert np.array_equal(result.values, bool_arr)
+
+ def test_boolean_no_coords(self):
+ """Boolean scalar without coordinates should create 0D DataArray."""
+ result = DataConverter.to_dataarray(True)
+ assert result.shape == ()
+ assert result.dims == ()
+ assert result.item()
+
+ result_false = DataConverter.to_dataarray(False)
+ assert result_false.shape == ()
+ assert result_false.dims == ()
+ assert not result_false.item()
+
+ def test_boolean_multidimensional_broadcast(self, standard_coords):
+ """Boolean values should broadcast to multiple dimensions."""
+ result = DataConverter.to_dataarray(True, coords=standard_coords)
+ assert result.shape == (5, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+ assert result.dtype == bool
+ assert np.all(result.values)
+
+ result_false = DataConverter.to_dataarray(False, coords=standard_coords)
+ assert result_false.shape == (5, 3, 2)
+ assert result_false.dims == ('time', 'scenario', 'region')
+ assert result_false.dtype == bool
+ assert not np.any(result_false.values)
+
+ def test_boolean_series(self, time_coords):
+ """Boolean Series should work."""
+ bool_series = pd.Series([True, False, True, False, True], index=time_coords)
+ result = DataConverter.to_dataarray(bool_series, coords={'time': time_coords})
+ assert result.shape == (5,)
+ assert result.dtype == bool
+ assert np.array_equal(result.values, bool_series.values)
+
+ result_repeat = DataConverter.to_dataarray(bool_series, coords={'time': time_coords})
+ assert result_repeat.shape == (5,)
+ assert result_repeat.dtype == bool
+ assert np.array_equal(result_repeat.values, bool_series.values)
+
+ def test_boolean_dataframe(self, time_coords):
+ """Boolean DataFrame should work."""
+ bool_df = pd.DataFrame({'values': [True, False, True, False, True]}, index=time_coords)
+ result = DataConverter.to_dataarray(bool_df, coords={'time': time_coords})
+ assert result.shape == (5,)
+ assert result.dtype == bool
+ assert np.array_equal(result.values, bool_df['values'].values)
+
+ result_repeat = DataConverter.to_dataarray(bool_df, coords={'time': time_coords})
+ assert result_repeat.shape == (5,)
+ assert result_repeat.dtype == bool
+ assert np.array_equal(result_repeat.values, bool_df['values'].values)
+
+ def test_multidimensional_boolean_array(self, standard_coords):
+ """Multi-dimensional boolean arrays should work."""
+ bool_data = np.array(
+ [[True, False, True], [False, True, False], [True, True, False], [False, False, True], [True, False, True]]
+ )
+ result = DataConverter.to_dataarray(
+ bool_data, coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']}
+ )
+ assert result.shape == (5, 3)
+ assert result.dtype == bool
+ assert np.array_equal(result.values, bool_data)
+
+ result_repeat = DataConverter.to_dataarray(
+ bool_data, coords={'time': standard_coords['time'], 'scenario': standard_coords['scenario']}
+ )
+ assert result_repeat.shape == (5, 3)
+ assert result_repeat.dtype == bool
+ assert np.array_equal(result_repeat.values, bool_data)
+
+
+class TestSpecialValues:
+ """Test handling of special numeric values."""
+
+ def test_nan_values(self, time_coords):
+ """NaN values should be preserved."""
+ arr_with_nan = np.array([1, np.nan, 3, np.nan, 5])
+ result = DataConverter.to_dataarray(arr_with_nan, coords={'time': time_coords})
+
+ assert np.array_equal(np.isnan(result.values), np.isnan(arr_with_nan))
+ assert np.array_equal(result.values[~np.isnan(result.values)], arr_with_nan[~np.isnan(arr_with_nan)])
+
+ def test_infinite_values(self, time_coords):
+ """Infinite values should be preserved."""
+ arr_with_inf = np.array([1, np.inf, 3, -np.inf, 5])
+ result = DataConverter.to_dataarray(arr_with_inf, coords={'time': time_coords})
+
+ assert np.array_equal(result.values, arr_with_inf)
+
+ def test_boolean_values(self, time_coords):
+ """Boolean values should be preserved."""
+ bool_arr = np.array([True, False, True, False, True])
+ result = DataConverter.to_dataarray(bool_arr, coords={'time': time_coords})
+
+ assert result.dtype == bool
+ assert np.array_equal(result.values, bool_arr)
+
+ def test_mixed_numeric_types(self, time_coords):
+ """Mixed integer/float should become float."""
+ mixed_arr = np.array([1, 2.5, 3, 4.5, 5])
+ result = DataConverter.to_dataarray(mixed_arr, coords={'time': time_coords})
+
+ assert np.issubdtype(result.dtype, np.floating)
+ assert np.array_equal(result.values, mixed_arr)
+
+ def test_special_values_in_multid_arrays(self, standard_coords):
+ """Special values should be preserved in multi-D arrays and broadcasting."""
+ # Array with NaN and inf
+ special_arr = np.array([1, np.nan, np.inf, -np.inf, 5])
+ result = DataConverter.to_dataarray(special_arr, coords=standard_coords)
+
+ assert result.shape == (5, 3, 2)
+
+ # Check that special values are preserved in all broadcasts
+ for scenario in standard_coords['scenario']:
+ for region in standard_coords['region']:
+ slice_data = result.sel(scenario=scenario, region=region)
+ assert np.array_equal(np.isnan(slice_data.values), np.isnan(special_arr))
+ assert np.array_equal(np.isinf(slice_data.values), np.isinf(special_arr))
+
+
+class TestAdvancedBroadcasting:
+ """Test advanced broadcasting scenarios and edge cases."""
+
+ def test_partial_dimension_matching_with_broadcasting(self, standard_coords):
+ """Test that partial dimension matching works with the improved integration."""
+ # 1D array matching one dimension should broadcast to all target dimensions
+ time_arr = np.array([10, 20, 30, 40, 50]) # matches time (length 5)
+ result = DataConverter.to_dataarray(time_arr, coords=standard_coords)
+
+ assert result.shape == (5, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+
+ # Verify broadcasting
+ for scenario in standard_coords['scenario']:
+ for region in standard_coords['region']:
+ assert np.array_equal(result.sel(scenario=scenario, region=region).values, time_arr)
+
+ def test_complex_multid_scenario(self):
+ """Complex real-world scenario with multi-D array and broadcasting."""
+ # Energy system data: time x technology, broadcast to regions
+ coords = {
+ 'time': pd.date_range('2024-01-01', periods=24, freq='h', name='time'), # 24 hours
+ 'technology': pd.Index(['solar', 'wind', 'gas', 'coal'], name='technology'), # 4 technologies
+ 'region': pd.Index(['north', 'south', 'east'], name='region'), # 3 regions
+ }
+
+ # Capacity factors: 24 x 4 (will broadcast to 24 x 4 x 3)
+ capacity_factors = np.random.rand(24, 4)
+
+ result = DataConverter.to_dataarray(capacity_factors, coords=coords)
+
+ assert result.shape == (24, 4, 3)
+ assert result.dims == ('time', 'technology', 'region')
+ assert isinstance(result.indexes['time'], pd.DatetimeIndex)
+
+ # Verify broadcasting: all regions should have same time×technology data
+ for region in coords['region']:
+ assert np.array_equal(result.sel(region=region).values, capacity_factors)
+
+ def test_ambiguous_length_handling(self):
+ """Test handling of ambiguous length scenarios across different data types."""
+ # All dimensions have length 3
+ coords_3x3x3 = {
+ 'time': pd.date_range('2024-01-01', periods=3, freq='D', name='time'),
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'),
+ 'region': pd.Index(['X', 'Y', 'Z'], name='region'),
+ }
+
+ # 1D array - should fail
+ arr_1d = np.array([1, 2, 3])
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_1d, coords=coords_3x3x3)
+
+ # 2D array - should fail
+ arr_2d = np.random.rand(3, 3)
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_2d, coords=coords_3x3x3)
+
+ # 3D array - should fail
+ arr_3d = np.random.rand(3, 3, 3)
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_3d, coords=coords_3x3x3)
+
+ def test_mixed_broadcasting_scenarios(self):
+ """Test various broadcasting scenarios with different input types."""
+ coords = {
+ 'time': pd.date_range('2024-01-01', periods=4, freq='D', name='time'), # length 4
+ 'scenario': pd.Index(['A', 'B'], name='scenario'), # length 2
+ 'region': pd.Index(['north', 'south', 'east'], name='region'), # length 3
+ 'product': pd.Index(['X', 'Y', 'Z', 'W', 'V'], name='product'), # length 5
+ }
+
+ # Scalar to 4D
+ scalar_result = DataConverter.to_dataarray(42, coords=coords)
+ assert scalar_result.shape == (4, 2, 3, 5)
+ assert np.all(scalar_result.values == 42)
+
+ # 1D array (length 4, matches time) to 4D
+ arr_1d = np.array([10, 20, 30, 40])
+ arr_result = DataConverter.to_dataarray(arr_1d, coords=coords)
+ assert arr_result.shape == (4, 2, 3, 5)
+ # Verify broadcasting
+ for scenario in coords['scenario']:
+ for region in coords['region']:
+ for product in coords['product']:
+ assert np.array_equal(
+ arr_result.sel(scenario=scenario, region=region, product=product).values, arr_1d
+ )
+
+ # 2D array (4x2, matches time×scenario) to 4D
+ arr_2d = np.random.rand(4, 2)
+ arr_2d_result = DataConverter.to_dataarray(arr_2d, coords=coords)
+ assert arr_2d_result.shape == (4, 2, 3, 5)
+ # Verify broadcasting
+ for region in coords['region']:
+ for product in coords['product']:
+ assert np.array_equal(arr_2d_result.sel(region=region, product=product).values, arr_2d)
+
+
+class TestAmbiguousDimensionLengthHandling:
+ """Test that DataConverter correctly raises errors when multiple dimensions have the same length."""
+
+ def test_1d_array_ambiguous_dimensions_simple(self):
+ """Test 1D array with two dimensions of same length should fail."""
+ # Both dimensions have length 3
+ coords_ambiguous = {
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['north', 'south', 'east'], name='region'), # length 3
+ }
+
+ arr_1d = np.array([1, 2, 3]) # length 3 - matches both dimensions
+
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_1d, coords=coords_ambiguous)
+
+ def test_1d_array_ambiguous_dimensions_complex(self):
+ """Test 1D array with multiple dimensions of same length."""
+ # Three dimensions have length 4
+ coords_4x4x4 = {
+ 'time': pd.date_range('2024-01-01', periods=4, freq='D', name='time'), # length 4
+ 'scenario': pd.Index(['A', 'B', 'C', 'D'], name='scenario'), # length 4
+ 'region': pd.Index(['north', 'south', 'east', 'west'], name='region'), # length 4
+ 'product': pd.Index(['X', 'Y'], name='product'), # length 2 - unique
+ }
+
+ # Array matching the ambiguous length
+ arr_1d = np.array([10, 20, 30, 40]) # length 4 - matches time, scenario, region
+
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_1d, coords=coords_4x4x4)
+
+ # Array matching the unique length should work
+ arr_1d_unique = np.array([100, 200]) # length 2 - matches only product
+ result = DataConverter.to_dataarray(arr_1d_unique, coords=coords_4x4x4)
+ assert result.shape == (4, 4, 4, 2) # broadcast to all dimensions
+ assert result.dims == ('time', 'scenario', 'region', 'product')
+
+ def test_2d_array_ambiguous_dimensions_both_same(self):
+ """Test 2D array where both dimensions have the same ambiguous length."""
+ # All dimensions have length 3
+ coords_3x3x3 = {
+ 'time': pd.date_range('2024-01-01', periods=3, freq='D', name='time'), # length 3
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['X', 'Y', 'Z'], name='region'), # length 3
+ }
+
+ # 3x3 array - could be any combination of the three dimensions
+ arr_2d = np.random.rand(3, 3)
+
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_2d, coords=coords_3x3x3)
+
+ def test_2d_array_one_dimension_ambiguous(self):
+ """Test 2D array where only one dimension length is ambiguous."""
+ coords_mixed = {
+ 'time': pd.date_range('2024-01-01', periods=5, freq='D', name='time'), # length 5 - unique
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['X', 'Y', 'Z'], name='region'), # length 3 - same as scenario
+ 'product': pd.Index(['P1', 'P2'], name='product'), # length 2 - unique
+ }
+
+ # 5x3 array - first dimension clearly maps to time (unique length 5)
+ # but second dimension could be scenario or region (both length 3)
+ arr_5x3 = np.random.rand(5, 3)
+
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_5x3, coords=coords_mixed)
+
+ # 5x2 array should work - dimensions are unambiguous
+ arr_5x2 = np.random.rand(5, 2)
+ result = DataConverter.to_dataarray(
+ arr_5x2, coords={'time': coords_mixed['time'], 'product': coords_mixed['product']}
+ )
+ assert result.shape == (5, 2)
+ assert result.dims == ('time', 'product')
+
+ def test_3d_array_all_dimensions_ambiguous(self):
+ """Test 3D array where all dimension lengths are ambiguous."""
+ # All dimensions have length 2
+ coords_2x2x2x2 = {
+ 'scenario': pd.Index(['A', 'B'], name='scenario'), # length 2
+ 'region': pd.Index(['north', 'south'], name='region'), # length 2
+ 'technology': pd.Index(['solar', 'wind'], name='technology'), # length 2
+ 'product': pd.Index(['X', 'Y'], name='product'), # length 2
+ }
+
+ # 2x2x2 array - could be any combination of 3 dimensions from the 4 available
+ arr_3d = np.random.rand(2, 2, 2)
+
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_3d, coords=coords_2x2x2x2)
+
+ def test_3d_array_partial_ambiguity(self):
+ """Test 3D array with partial dimension ambiguity."""
+ coords_partial = {
+ 'time': pd.date_range('2024-01-01', periods=4, freq='D', name='time'), # length 4 - unique
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['X', 'Y', 'Z'], name='region'), # length 3 - same as scenario
+ 'technology': pd.Index(['solar', 'wind'], name='technology'), # length 2 - unique
+ }
+
+ # 4x3x2 array - first and third dimensions are unique, middle is ambiguous
+ # This should still fail because middle dimension (length 3) could be scenario or region
+ arr_4x3x2 = np.random.rand(4, 3, 2)
+
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_4x3x2, coords=coords_partial)
+
+ def test_pandas_series_ambiguous_dimensions(self):
+ """Test pandas Series with ambiguous dimension lengths."""
+ coords_ambiguous = {
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['north', 'south', 'east'], name='region'), # length 3
+ }
+
+ # Series with length 3 but index that doesn't match either coordinate exactly
+ generic_series = pd.Series([10, 20, 30], index=[0, 1, 2])
+
+ # Should fail because length matches multiple dimensions and index doesn't match any
+ with pytest.raises(ConversionError, match='Series index does not match any target dimension coordinates'):
+ DataConverter.to_dataarray(generic_series, coords=coords_ambiguous)
+
+ # Series with index that matches one of the ambiguous coordinates should work
+ scenario_series = pd.Series([10, 20, 30], index=coords_ambiguous['scenario'])
+ result = DataConverter.to_dataarray(scenario_series, coords=coords_ambiguous)
+ assert result.shape == (3, 3) # should broadcast to both dimensions
+ assert result.dims == ('scenario', 'region')
+
+ def test_edge_case_many_same_lengths(self):
+ """Test edge case with many dimensions having the same length."""
+ # Five dimensions all have length 2
+ coords_many = {
+ 'dim1': pd.Index(['A', 'B'], name='dim1'),
+ 'dim2': pd.Index(['X', 'Y'], name='dim2'),
+ 'dim3': pd.Index(['P', 'Q'], name='dim3'),
+ 'dim4': pd.Index(['M', 'N'], name='dim4'),
+ 'dim5': pd.Index(['U', 'V'], name='dim5'),
+ }
+
+ # 1D array
+ arr_1d = np.array([1, 2])
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_1d, coords=coords_many)
+
+ # 2D array
+ arr_2d = np.random.rand(2, 2)
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_2d, coords=coords_many)
+
+ # 3D array
+ arr_3d = np.random.rand(2, 2, 2)
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_3d, coords=coords_many)
+
+ def test_mixed_lengths_with_duplicates(self):
+ """Test mixed scenario with some duplicate and some unique lengths."""
+ coords_mixed = {
+ 'time': pd.date_range('2024-01-01', periods=8, freq='D', name='time'), # length 8 - unique
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['X', 'Y', 'Z'], name='region'), # length 3 - same as scenario
+ 'technology': pd.Index(['solar'], name='technology'), # length 1 - unique
+ 'product': pd.Index(['P1', 'P2', 'P3', 'P4', 'P5'], name='product'), # length 5 - unique
+ }
+
+ # Arrays with unique lengths should work
+ arr_8 = np.arange(8)
+ result_8 = DataConverter.to_dataarray(arr_8, coords=coords_mixed)
+ assert result_8.dims == ('time', 'scenario', 'region', 'technology', 'product')
+
+ arr_1 = np.array([42])
+ result_1 = DataConverter.to_dataarray(arr_1, coords={'technology': coords_mixed['technology']})
+ assert result_1.shape == (1,)
+
+ arr_5 = np.arange(5)
+ result_5 = DataConverter.to_dataarray(arr_5, coords={'product': coords_mixed['product']})
+ assert result_5.shape == (5,)
+
+ # Arrays with ambiguous length should fail
+ arr_3 = np.array([1, 2, 3]) # matches both scenario and region
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_3, coords=coords_mixed)
+
+ def test_dataframe_with_ambiguous_dimensions(self):
+ """Test DataFrame handling with ambiguous dimensions."""
+ coords_ambiguous = {
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['X', 'Y', 'Z'], name='region'), # length 3
+ }
+
+ # Multi-column DataFrame with ambiguous dimensions
+ df = pd.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6], 'col3': [7, 8, 9]}) # 3x3 DataFrame
+
+ # Should fail due to ambiguous dimensions
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(df, coords=coords_ambiguous)
+
+ def test_error_message_quality_for_ambiguous_dimensions(self):
+ """Test that error messages for ambiguous dimensions are helpful."""
+ coords_ambiguous = {
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'),
+ 'region': pd.Index(['north', 'south', 'east'], name='region'),
+ 'technology': pd.Index(['solar', 'wind', 'gas'], name='technology'),
+ }
+
+ # 1D array case
+ arr_1d = np.array([1, 2, 3])
+ try:
+ DataConverter.to_dataarray(arr_1d, coords=coords_ambiguous)
+ raise AssertionError('Should have raised ConversionError')
+ except ConversionError as e:
+ error_msg = str(e)
+ assert 'matches multiple dimension' in error_msg
+ assert 'scenario' in error_msg
+ assert 'region' in error_msg
+ assert 'technology' in error_msg
+
+ # 2D array case
+ arr_2d = np.random.rand(3, 3)
+ try:
+ DataConverter.to_dataarray(arr_2d, coords=coords_ambiguous)
+ raise AssertionError('Should have raised ConversionError')
+ except ConversionError as e:
+ error_msg = str(e)
+ assert 'matches multiple dimension combinations' in error_msg
+ assert '(3, 3)' in error_msg
+
+ def test_ambiguous_with_broadcasting_target(self):
+ """Test ambiguous dimensions when target includes broadcasting."""
+ coords_ambiguous_plus = {
+ 'time': pd.date_range('2024-01-01', periods=5, freq='D', name='time'), # length 5
+ 'scenario': pd.Index(['A', 'B', 'C'], name='scenario'), # length 3
+ 'region': pd.Index(['X', 'Y', 'Z'], name='region'), # length 3 - same as scenario
+ 'technology': pd.Index(['solar', 'wind'], name='technology'), # length 2
+ }
+
+ # 1D array with ambiguous length, but targeting broadcast scenario
+ arr_3 = np.array([10, 20, 30]) # length 3, matches scenario and region
+
+ # Should fail even though it would broadcast to other dimensions
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_3, coords=coords_ambiguous_plus)
+
+ # 2D array with one ambiguous dimension
+ arr_5x3 = np.random.rand(5, 3) # 5 is unique (time), 3 is ambiguous (scenario/region)
+
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(arr_5x3, coords=coords_ambiguous_plus)
+
+ def test_time_dimension_ambiguity(self):
+ """Test ambiguity specifically involving time dimension."""
+ # Create scenario where time has same length as another dimension
+ coords_time_ambiguous = {
+ 'time': pd.date_range('2024-01-01', periods=3, freq='D', name='time'), # length 3
+ 'scenario': pd.Index(['base', 'high', 'low'], name='scenario'), # length 3 - same as time
+ 'region': pd.Index(['north', 'south'], name='region'), # length 2 - unique
+ }
+
+ # Time-indexed series should work even with ambiguous lengths (index matching takes precedence)
+ time_series = pd.Series([100, 200, 300], index=coords_time_ambiguous['time'])
+ result = DataConverter.to_dataarray(time_series, coords=coords_time_ambiguous)
+ assert result.shape == (3, 3, 2)
+ assert result.dims == ('time', 'scenario', 'region')
+
+ # But generic array with length 3 should still fail
+ generic_array = np.array([100, 200, 300])
+ with pytest.raises(ConversionError, match='matches multiple dimension'):
+ DataConverter.to_dataarray(generic_array, coords=coords_time_ambiguous)
if __name__ == '__main__':
diff --git a/tests/test_effect.py b/tests/test_effect.py
index b4a618ea6..cd3edc537 100644
--- a/tests/test_effect.py
+++ b/tests/test_effect.py
@@ -1,163 +1,342 @@
+import numpy as np
+import xarray as xr
+
import flixopt as fx
-from .conftest import assert_conequal, assert_var_equal, create_linopy_model
+from .conftest import (
+ assert_conequal,
+ assert_sets_equal,
+ assert_var_equal,
+ create_calculation_and_solve,
+ create_linopy_model,
+)
-class TestBusModel:
+class TestEffectModel:
"""Test the FlowModel class."""
- def test_minimal(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_minimal(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system = basic_flow_system_linopy_coords
effect = fx.Effect('Effect1', '€', 'Testing Effect')
flow_system.add_elements(effect)
model = create_linopy_model(flow_system)
- assert set(effect.model.variables) == {
- 'Effect1(invest)|total',
- 'Effect1(operation)|total',
- 'Effect1(operation)|total_per_timestep',
- 'Effect1|total',
- }
- assert set(effect.model.constraints) == {
- 'Effect1(invest)|total',
- 'Effect1(operation)|total',
- 'Effect1(operation)|total_per_timestep',
- 'Effect1|total',
- }
+ assert_sets_equal(
+ set(effect.submodel.variables),
+ {
+ 'Effect1(periodic)',
+ 'Effect1(temporal)',
+ 'Effect1(temporal)|per_timestep',
+ 'Effect1',
+ },
+ msg='Incorrect variables',
+ )
- assert_var_equal(model.variables['Effect1|total'], model.add_variables())
- assert_var_equal(model.variables['Effect1(invest)|total'], model.add_variables())
- assert_var_equal(model.variables['Effect1(operation)|total'], model.add_variables())
+ assert_sets_equal(
+ set(effect.submodel.constraints),
+ {
+ 'Effect1(periodic)',
+ 'Effect1(temporal)',
+ 'Effect1(temporal)|per_timestep',
+ 'Effect1',
+ },
+ msg='Incorrect constraints',
+ )
+
+ assert_var_equal(
+ model.variables['Effect1'], model.add_variables(coords=model.get_coords(['period', 'scenario']))
+ )
+ assert_var_equal(
+ model.variables['Effect1(periodic)'], model.add_variables(coords=model.get_coords(['period', 'scenario']))
+ )
+ assert_var_equal(
+ model.variables['Effect1(temporal)'],
+ model.add_variables(coords=model.get_coords(['period', 'scenario'])),
+ )
assert_var_equal(
- model.variables['Effect1(operation)|total_per_timestep'], model.add_variables(coords=(timesteps,))
+ model.variables['Effect1(temporal)|per_timestep'], model.add_variables(coords=model.get_coords())
)
assert_conequal(
- model.constraints['Effect1|total'],
- model.variables['Effect1|total']
- == model.variables['Effect1(operation)|total'] + model.variables['Effect1(invest)|total'],
+ model.constraints['Effect1'],
+ model.variables['Effect1'] == model.variables['Effect1(temporal)'] + model.variables['Effect1(periodic)'],
)
- assert_conequal(model.constraints['Effect1(invest)|total'], model.variables['Effect1(invest)|total'] == 0)
+ # In minimal/bounds tests with no contributing components, periodic totals should be zero
+ assert_conequal(model.constraints['Effect1(periodic)'], model.variables['Effect1(periodic)'] == 0)
assert_conequal(
- model.constraints['Effect1(operation)|total'],
- model.variables['Effect1(operation)|total']
- == model.variables['Effect1(operation)|total_per_timestep'].sum(),
+ model.constraints['Effect1(temporal)'],
+ model.variables['Effect1(temporal)'] == model.variables['Effect1(temporal)|per_timestep'].sum('time'),
)
assert_conequal(
- model.constraints['Effect1(operation)|total_per_timestep'],
- model.variables['Effect1(operation)|total_per_timestep'] == 0,
+ model.constraints['Effect1(temporal)|per_timestep'],
+ model.variables['Effect1(temporal)|per_timestep'] == 0,
)
- def test_bounds(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_bounds(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system = basic_flow_system_linopy_coords
effect = fx.Effect(
'Effect1',
'€',
'Testing Effect',
- minimum_operation=1.0,
- maximum_operation=1.1,
- minimum_invest=2.0,
- maximum_invest=2.1,
+ minimum_temporal=1.0,
+ maximum_temporal=1.1,
+ minimum_periodic=2.0,
+ maximum_periodic=2.1,
minimum_total=3.0,
maximum_total=3.1,
- minimum_operation_per_hour=4.0,
- maximum_operation_per_hour=4.1,
+ minimum_per_hour=4.0,
+ maximum_per_hour=4.1,
)
flow_system.add_elements(effect)
model = create_linopy_model(flow_system)
- assert set(effect.model.variables) == {
- 'Effect1(invest)|total',
- 'Effect1(operation)|total',
- 'Effect1(operation)|total_per_timestep',
- 'Effect1|total',
- }
- assert set(effect.model.constraints) == {
- 'Effect1(invest)|total',
- 'Effect1(operation)|total',
- 'Effect1(operation)|total_per_timestep',
- 'Effect1|total',
- }
+ assert_sets_equal(
+ set(effect.submodel.variables),
+ {
+ 'Effect1(periodic)',
+ 'Effect1(temporal)',
+ 'Effect1(temporal)|per_timestep',
+ 'Effect1',
+ },
+ msg='Incorrect variables',
+ )
+
+ assert_sets_equal(
+ set(effect.submodel.constraints),
+ {
+ 'Effect1(periodic)',
+ 'Effect1(temporal)',
+ 'Effect1(temporal)|per_timestep',
+ 'Effect1',
+ },
+ msg='Incorrect constraints',
+ )
- assert_var_equal(model.variables['Effect1|total'], model.add_variables(lower=3.0, upper=3.1))
- assert_var_equal(model.variables['Effect1(invest)|total'], model.add_variables(lower=2.0, upper=2.1))
- assert_var_equal(model.variables['Effect1(operation)|total'], model.add_variables(lower=1.0, upper=1.1))
assert_var_equal(
- model.variables['Effect1(operation)|total_per_timestep'],
+ model.variables['Effect1'],
+ model.add_variables(lower=3.0, upper=3.1, coords=model.get_coords(['period', 'scenario'])),
+ )
+ assert_var_equal(
+ model.variables['Effect1(periodic)'],
+ model.add_variables(lower=2.0, upper=2.1, coords=model.get_coords(['period', 'scenario'])),
+ )
+ assert_var_equal(
+ model.variables['Effect1(temporal)'],
+ model.add_variables(lower=1.0, upper=1.1, coords=model.get_coords(['period', 'scenario'])),
+ )
+ assert_var_equal(
+ model.variables['Effect1(temporal)|per_timestep'],
model.add_variables(
- lower=4.0 * model.hours_per_step, upper=4.1 * model.hours_per_step, coords=(timesteps,)
+ lower=4.0 * model.hours_per_step,
+ upper=4.1 * model.hours_per_step,
+ coords=model.get_coords(['time', 'period', 'scenario']),
),
)
assert_conequal(
- model.constraints['Effect1|total'],
- model.variables['Effect1|total']
- == model.variables['Effect1(operation)|total'] + model.variables['Effect1(invest)|total'],
+ model.constraints['Effect1'],
+ model.variables['Effect1'] == model.variables['Effect1(temporal)'] + model.variables['Effect1(periodic)'],
)
- assert_conequal(model.constraints['Effect1(invest)|total'], model.variables['Effect1(invest)|total'] == 0)
+ # In minimal/bounds tests with no contributing components, periodic totals should be zero
+ assert_conequal(model.constraints['Effect1(periodic)'], model.variables['Effect1(periodic)'] == 0)
assert_conequal(
- model.constraints['Effect1(operation)|total'],
- model.variables['Effect1(operation)|total']
- == model.variables['Effect1(operation)|total_per_timestep'].sum(),
+ model.constraints['Effect1(temporal)'],
+ model.variables['Effect1(temporal)'] == model.variables['Effect1(temporal)|per_timestep'].sum('time'),
)
assert_conequal(
- model.constraints['Effect1(operation)|total_per_timestep'],
- model.variables['Effect1(operation)|total_per_timestep'] == 0,
+ model.constraints['Effect1(temporal)|per_timestep'],
+ model.variables['Effect1(temporal)|per_timestep'] == 0,
)
- def test_shares(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
+ def test_shares(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system = basic_flow_system_linopy_coords
effect1 = fx.Effect(
'Effect1',
'€',
'Testing Effect',
- specific_share_to_other_effects_operation={'Effect2': 1.1, 'Effect3': 1.2},
- specific_share_to_other_effects_invest={'Effect2': 2.1, 'Effect3': 2.2},
)
- effect2 = fx.Effect('Effect2', '€', 'Testing Effect')
- effect3 = fx.Effect('Effect3', '€', 'Testing Effect')
+ effect2 = fx.Effect(
+ 'Effect2',
+ '€',
+ 'Testing Effect',
+ share_from_temporal={'Effect1': 1.1},
+ share_from_periodic={'Effect1': 2.1},
+ )
+ effect3 = fx.Effect(
+ 'Effect3',
+ '€',
+ 'Testing Effect',
+ share_from_temporal={'Effect1': 1.2},
+ share_from_periodic={'Effect1': 2.2},
+ )
flow_system.add_elements(effect1, effect2, effect3)
model = create_linopy_model(flow_system)
- assert set(effect2.model.variables) == {
- 'Effect2(invest)|total',
- 'Effect2(operation)|total',
- 'Effect2(operation)|total_per_timestep',
- 'Effect2|total',
- 'Effect1(invest)->Effect2(invest)',
- 'Effect1(operation)->Effect2(operation)',
- }
- assert set(effect2.model.constraints) == {
- 'Effect2(invest)|total',
- 'Effect2(operation)|total',
- 'Effect2(operation)|total_per_timestep',
- 'Effect2|total',
- 'Effect1(invest)->Effect2(invest)',
- 'Effect1(operation)->Effect2(operation)',
- }
+ assert_sets_equal(
+ set(effect2.submodel.variables),
+ {
+ 'Effect2(periodic)',
+ 'Effect2(temporal)',
+ 'Effect2(temporal)|per_timestep',
+ 'Effect2',
+ 'Effect1(periodic)->Effect2(periodic)',
+ 'Effect1(temporal)->Effect2(temporal)',
+ },
+ msg='Incorrect variables for effect2',
+ )
+
+ assert_sets_equal(
+ set(effect2.submodel.constraints),
+ {
+ 'Effect2(periodic)',
+ 'Effect2(temporal)',
+ 'Effect2(temporal)|per_timestep',
+ 'Effect2',
+ 'Effect1(periodic)->Effect2(periodic)',
+ 'Effect1(temporal)->Effect2(temporal)',
+ },
+ msg='Incorrect constraints for effect2',
+ )
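+ # Effect2's periodic and per-timestep totals are fed entirely by the shares from Effect1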
assert_conequal(
- model.constraints['Effect2(invest)|total'],
- model.variables['Effect2(invest)|total'] == model.variables['Effect1(invest)->Effect2(invest)'],
+ model.constraints['Effect2(periodic)'],
+ model.variables['Effect2(periodic)'] == model.variables['Effect1(periodic)->Effect2(periodic)'],
)
assert_conequal(
- model.constraints['Effect2(operation)|total_per_timestep'],
- model.variables['Effect2(operation)|total_per_timestep']
- == model.variables['Effect1(operation)->Effect2(operation)'],
+ model.constraints['Effect2(temporal)|per_timestep'],
+ model.variables['Effect2(temporal)|per_timestep']
+ == model.variables['Effect1(temporal)->Effect2(temporal)'],
)
assert_conequal(
- model.constraints['Effect1(operation)->Effect2(operation)'],
- model.variables['Effect1(operation)->Effect2(operation)']
- == model.variables['Effect1(operation)|total_per_timestep'] * 1.1,
+ model.constraints['Effect1(temporal)->Effect2(temporal)'],
+ model.variables['Effect1(temporal)->Effect2(temporal)']
+ == model.variables['Effect1(temporal)|per_timestep'] * 1.1,
)
assert_conequal(
- model.constraints['Effect1(invest)->Effect2(invest)'],
- model.variables['Effect1(invest)->Effect2(invest)'] == model.variables['Effect1(invest)|total'] * 2.1,
+ model.constraints['Effect1(periodic)->Effect2(periodic)'],
+ model.variables['Effect1(periodic)->Effect2(periodic)'] == model.variables['Effect1(periodic)'] * 2.1,
+ )
+
+
+class TestEffectResults:
+ def test_shares(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system = basic_flow_system_linopy_coords
+ effect1 = fx.Effect('Effect1', '€', 'Testing Effect', share_from_temporal={'costs': 0.5})
+ effect2 = fx.Effect(
+ 'Effect2',
+ '€',
+ 'Testing Effect',
+ share_from_temporal={'Effect1': 1.1},
+ share_from_periodic={'Effect1': 2.1},
+ )
+ effect3 = fx.Effect(
+ 'Effect3',
+ '€',
+ 'Testing Effect',
+ share_from_temporal={'Effect1': 1.2, 'Effect2': 5},
+ share_from_periodic={'Effect1': 2.2},
+ )
+ flow_system.add_elements(
+ effect1,
+ effect2,
+ effect3,
+ fx.linear_converters.Boiler(
+ 'Boiler',
+ eta=0.5,
+ Q_th=fx.Flow(
+ 'Q_th',
+ bus='Fernwärme',
+ size=fx.InvestParameters(effects_of_investment_per_size=10, minimum_size=20, mandatory=True),
+ ),
+ Q_fu=fx.Flow('Q_fu', bus='Gas'),
+ ),
+ )
+
+ results = create_calculation_and_solve(flow_system, fx.solvers.HighsSolver(0.01, 60), 'Sim1').results
+
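+ # Expected share factors, including indirect contributions composed through intermediate effects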
+ effect_share_factors = {
+ 'temporal': {
+ ('costs', 'Effect1'): 0.5,
+ ('costs', 'Effect2'): 0.5 * 1.1,
+ ('costs', 'Effect3'): 0.5 * 1.1 * 5 + 0.5 * 1.2, # Two paths: costs->Effect1->Effect2->Effect3 and costs->Effect1->Effect3
+ ('Effect1', 'Effect2'): 1.1,
+ ('Effect1', 'Effect3'): 1.2 + 1.1 * 5,
+ ('Effect2', 'Effect3'): 5,
+ },
+ 'periodic': {
+ ('Effect1', 'Effect2'): 2.1,
+ ('Effect1', 'Effect3'): 2.2,
+ },
+ }
+ for key, value in effect_share_factors['temporal'].items():
+ np.testing.assert_allclose(results.effect_share_factors['temporal'][key].values, value)
+
+ for key, value in effect_share_factors['periodic'].items():
+ np.testing.assert_allclose(results.effect_share_factors['periodic'][key].values, value)
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['temporal'].sum('component').sel(effect='costs', drop=True),
+ results.solution['costs(temporal)|per_timestep'].fillna(0),
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['temporal'].sum('component').sel(effect='Effect1', drop=True),
+ results.solution['Effect1(temporal)|per_timestep'].fillna(0),
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['temporal'].sum('component').sel(effect='Effect2', drop=True),
+ results.solution['Effect2(temporal)|per_timestep'].fillna(0),
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['temporal'].sum('component').sel(effect='Effect3', drop=True),
+ results.solution['Effect3(temporal)|per_timestep'].fillna(0),
+ )
+
+ # Periodic mode checks
+ xr.testing.assert_allclose(
+ results.effects_per_component['periodic'].sum('component').sel(effect='costs', drop=True),
+ results.solution['costs(periodic)'],
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['periodic'].sum('component').sel(effect='Effect1', drop=True),
+ results.solution['Effect1(periodic)'],
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['periodic'].sum('component').sel(effect='Effect2', drop=True),
+ results.solution['Effect2(periodic)'],
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['periodic'].sum('component').sel(effect='Effect3', drop=True),
+ results.solution['Effect3(periodic)'],
+ )
+
+ # Total mode checks
+ xr.testing.assert_allclose(
+ results.effects_per_component['total'].sum('component').sel(effect='costs', drop=True),
+ results.solution['costs'],
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['total'].sum('component').sel(effect='Effect1', drop=True),
+ results.solution['Effect1'],
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['total'].sum('component').sel(effect='Effect2', drop=True),
+ results.solution['Effect2'],
+ )
+
+ xr.testing.assert_allclose(
+ results.effects_per_component['total'].sum('component').sel(effect='Effect3', drop=True),
+ results.solution['Effect3'],
)
diff --git a/tests/test_effects_shares_summation.py b/tests/test_effects_shares_summation.py
new file mode 100644
index 000000000..312934732
--- /dev/null
+++ b/tests/test_effects_shares_summation.py
@@ -0,0 +1,225 @@
+import pytest
+import xarray as xr
+
+from flixopt.effects import calculate_all_conversion_paths
+
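+# The tests below assume these semantics: the result maps every reachable
+# (source, target) pair to the sum, over all directed paths, of the product of
+# the edge factors along each path. A minimal DFS sketch of that behaviour
+# (illustration only, not the flixopt implementation; `_reference_all_paths`
+# is a hypothetical helper and assumes the conversion graph is acyclic):
+def _reference_all_paths(conversion_dict):
+    """Naive reference: multiply factors along each path, sum across paths."""
+    result = {}
+
+    def visit(origin, node, factor):
+        # Follow every outgoing edge, accumulating the path product
+        for target, edge in conversion_dict.get(node, {}).items():
+            combined = factor * edge
+            key = (origin, target)
+            result[key] = result[key] + combined if key in result else combined
+            visit(origin, target, combined)
+
+    for origin in conversion_dict:
+        visit(origin, origin, xr.DataArray(1.0))
+    return result
+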
+
+def test_direct_conversions():
+ """Test direct conversions with simple scalar values."""
+ conversion_dict = {'A': {'B': xr.DataArray(2.0)}, 'B': {'C': xr.DataArray(3.0)}}
+
+ result = calculate_all_conversion_paths(conversion_dict)
+
+ # Check direct conversions
+ assert ('A', 'B') in result
+ assert ('B', 'C') in result
+ assert result[('A', 'B')].item() == 2.0
+ assert result[('B', 'C')].item() == 3.0
+
+ # Check indirect conversion
+ assert ('A', 'C') in result
+ assert result[('A', 'C')].item() == 6.0 # 2.0 * 3.0
+
+
+def test_multiple_paths():
+ """Test multiple paths between nodes that should be summed."""
+ conversion_dict = {
+ 'A': {'B': xr.DataArray(2.0), 'C': xr.DataArray(3.0)},
+ 'B': {'D': xr.DataArray(4.0)},
+ 'C': {'D': xr.DataArray(5.0)},
+ }
+
+ result = calculate_all_conversion_paths(conversion_dict)
+
+ # A to D should sum two paths: A->B->D (2*4=8) and A->C->D (3*5=15)
+ assert ('A', 'D') in result
+ assert result[('A', 'D')].item() == 8.0 + 15.0
+
+
+def test_xarray_conversions():
+ """Test with xarray DataArrays that have dimensions."""
+ # Create DataArrays with a time dimension
+ time_points = [1, 2, 3]
+ a_to_b = xr.DataArray([2.0, 2.1, 2.2], dims='time', coords={'time': time_points})
+ b_to_c = xr.DataArray([3.0, 3.1, 3.2], dims='time', coords={'time': time_points})
+
+ conversion_dict = {'A': {'B': a_to_b}, 'B': {'C': b_to_c}}
+
+ result = calculate_all_conversion_paths(conversion_dict)
+
+ # Check indirect conversion preserves dimensions
+ assert ('A', 'C') in result
+ assert result[('A', 'C')].dims == ('time',)
+
+ # Check values at each time point
+ for i, t in enumerate(time_points):
+ expected = a_to_b.values[i] * b_to_c.values[i]
+ assert pytest.approx(result[('A', 'C')].sel(time=t).item()) == expected
+
+
+def test_long_paths():
+ """Test with longer paths (more than one intermediate node)."""
+ conversion_dict = {
+ 'A': {'B': xr.DataArray(2.0)},
+ 'B': {'C': xr.DataArray(3.0)},
+ 'C': {'D': xr.DataArray(4.0)},
+ 'D': {'E': xr.DataArray(5.0)},
+ }
+
+ result = calculate_all_conversion_paths(conversion_dict)
+
+ # Check the full path A->B->C->D->E
+ assert ('A', 'E') in result
+ expected = 2.0 * 3.0 * 4.0 * 5.0 # 120.0
+ assert result[('A', 'E')].item() == expected
+
+
+def test_diamond_paths():
+ """Test with a diamond shape graph with multiple paths to the same destination."""
+ conversion_dict = {
+ 'A': {'B': xr.DataArray(2.0), 'C': xr.DataArray(3.0)},
+ 'B': {'D': xr.DataArray(4.0)},
+ 'C': {'D': xr.DataArray(5.0)},
+ 'D': {'E': xr.DataArray(6.0)},
+ }
+
+ result = calculate_all_conversion_paths(conversion_dict)
+
+ # A to E should go through both paths:
+ # A->B->D->E (2*4*6=48) and A->C->D->E (3*5*6=90)
+ assert ('A', 'E') in result
+ expected = 48.0 + 90.0 # 138.0
+ assert result[('A', 'E')].item() == expected
+
+
+def test_effect_shares_example():
+ """Test the specific example from the effects share factors test."""
+ # Create the conversion dictionary based on test example
+ conversion_dict = {
+ 'costs': {'Effect1': xr.DataArray(0.5)},
+ 'Effect1': {'Effect2': xr.DataArray(1.1), 'Effect3': xr.DataArray(1.2)},
+ 'Effect2': {'Effect3': xr.DataArray(5.0)},
+ }
+
+ result = calculate_all_conversion_paths(conversion_dict)
+
+ # Test direct paths
+ assert result[('costs', 'Effect1')].item() == 0.5
+ assert result[('Effect1', 'Effect2')].item() == 1.1
+ assert result[('Effect2', 'Effect3')].item() == 5.0
+
+ # Test indirect paths
+ # costs -> Effect2 = costs -> Effect1 -> Effect2 = 0.5 * 1.1
+ assert result[('costs', 'Effect2')].item() == 0.5 * 1.1
+
+ # costs -> Effect3 has two paths:
+ # 1. costs -> Effect1 -> Effect3 = 0.5 * 1.2 = 0.6
+ # 2. costs -> Effect1 -> Effect2 -> Effect3 = 0.5 * 1.1 * 5 = 2.75
+ # Total = 0.6 + 2.75 = 3.35
+ assert result[('costs', 'Effect3')].item() == 0.5 * 1.2 + 0.5 * 1.1 * 5
+
+ # Effect1 -> Effect3 has two paths:
+ # 1. Effect1 -> Effect2 -> Effect3 = 1.1 * 5.0 = 5.5
+ # 2. Effect1 -> Effect3 = 1.2
+ # Total = 1.2 + 5.5 = 6.7
+ assert result[('Effect1', 'Effect3')].item() == 1.2 + 1.1 * 5.0
+
+
+def test_empty_conversion_dict():
+ """Test with an empty conversion dictionary."""
+ result = calculate_all_conversion_paths({})
+ assert len(result) == 0
+
+
+def test_no_indirect_paths():
+ """Test with a dictionary that has no indirect paths."""
+ conversion_dict = {'A': {'B': xr.DataArray(2.0)}, 'C': {'D': xr.DataArray(3.0)}}
+
+ result = calculate_all_conversion_paths(conversion_dict)
+
+ # Only direct paths should exist
+ assert len(result) == 2
+ assert ('A', 'B') in result
+ assert ('C', 'D') in result
+ assert result[('A', 'B')].item() == 2.0
+ assert result[('C', 'D')].item() == 3.0
+
+
+def test_complex_network():
+ """Test with a complex network of many nodes and multiple paths, without circular references."""
+ # Create a directed acyclic graph with many nodes
+ # Structure resembles a layered network with multiple paths
+ conversion_dict = {
+ 'A': {'B': xr.DataArray(1.5), 'C': xr.DataArray(2.0), 'D': xr.DataArray(0.5)},
+ 'B': {'E': xr.DataArray(3.0), 'F': xr.DataArray(1.2)},
+ 'C': {'E': xr.DataArray(0.8), 'G': xr.DataArray(2.5)},
+ 'D': {'G': xr.DataArray(1.8), 'H': xr.DataArray(3.2)},
+ 'E': {'I': xr.DataArray(0.7), 'J': xr.DataArray(1.4)},
+ 'F': {'J': xr.DataArray(2.2), 'K': xr.DataArray(0.9)},
+ 'G': {'K': xr.DataArray(1.6), 'L': xr.DataArray(2.8)},
+ 'H': {'L': xr.DataArray(0.4), 'M': xr.DataArray(1.1)},
+ 'I': {'N': xr.DataArray(2.3)},
+ 'J': {'N': xr.DataArray(1.9), 'O': xr.DataArray(0.6)},
+ 'K': {'O': xr.DataArray(3.5), 'P': xr.DataArray(1.3)},
+ 'L': {'P': xr.DataArray(2.7), 'Q': xr.DataArray(0.8)},
+ 'M': {'Q': xr.DataArray(2.1)},
+ 'N': {'R': xr.DataArray(1.7)},
+ 'O': {'R': xr.DataArray(2.9), 'S': xr.DataArray(1.0)},
+ 'P': {'S': xr.DataArray(2.4)},
+ 'Q': {'S': xr.DataArray(1.5)},
+ }
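+ # Layer structure: A -> {B, C, D} -> {E, F, G, H} -> {I, J, K, L, M} -> {N, O, P, Q} -> {R, S}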
+
+ result = calculate_all_conversion_paths(conversion_dict)
+
+ # Check some direct paths
+ assert result[('A', 'B')].item() == 1.5
+ assert result[('D', 'H')].item() == 3.2
+ assert result[('G', 'L')].item() == 2.8
+
+ # Check some two-step paths
+ assert result[('A', 'E')].item() == 1.5 * 3.0 + 2.0 * 0.8 # A->B->E + A->C->E
+ assert result[('B', 'J')].item() == 3.0 * 1.4 + 1.2 * 2.2 # B->E->J + B->F->J
+
+ # Check some three-step paths
+ # A->B->E->I
+ # A->C->E->I
+ expected_a_to_i = 1.5 * 3.0 * 0.7 + 2.0 * 0.8 * 0.7
+ assert pytest.approx(result[('A', 'I')].item()) == expected_a_to_i
+
+ # Check some four-step paths
+ # A->B->E->I->N
+ # A->C->E->I->N
+ expected_a_to_n = 1.5 * 3.0 * 0.7 * 2.3 + 2.0 * 0.8 * 0.7 * 2.3
+ expected_a_to_n += 1.5 * 3.0 * 1.4 * 1.9 + 2.0 * 0.8 * 1.4 * 1.9 # A->B->E->J->N + A->C->E->J->N
+ expected_a_to_n += 1.5 * 1.2 * 2.2 * 1.9 # A->B->F->J->N
+ assert pytest.approx(result[('A', 'N')].item()) == expected_a_to_n
+
+ # Check a very long path from A to S
+ # This should include:
+ # A->B->E->J->O->S
+ # A->B->F->K->O->S
+ # A->C->E->J->O->S
+ # A->C->G->K->O->S
+ # A->D->G->K->O->S
+ # A->D->H->L->P->S
+ # A->D->H->M->Q->S
+ # And many more
+ assert ('A', 'S') in result
+
+ # There are many paths from A to R - check that the pair exists
+ assert ('A', 'R') in result
+
+ # A has no direct edge to R, so the entry can only come from indirect paths
+ assert 'R' not in conversion_dict.get('A', {})
+
+ # Count the calculated pairs to verify the algorithm explored all connections.
+ # In a DAG with 19 nodes (A through S) there are at most 19*18 = 342 ordered pairs,
+ # but the layered structure yields far fewer connections.
+ # Just verify we have a reasonable number
+ assert len(result) > 50
+
+
+if __name__ == '__main__':
+ pytest.main()
diff --git a/tests/test_examples.py b/tests/test_examples.py
index f29ea66b6..eca79d7c7 100644
--- a/tests/test_examples.py
+++ b/tests/test_examples.py
@@ -1,6 +1,7 @@
import os
import subprocess
import sys
+from contextlib import contextmanager
from pathlib import Path
import pytest
@@ -8,42 +9,78 @@
# Path to the examples directory
EXAMPLES_DIR = Path(__file__).parent.parent / 'examples'
+# Examples that have dependencies and must run in sequence
+DEPENDENT_EXAMPLES = (
+ '02_Complex/complex_example.py',
+ '02_Complex/complex_example_results.py',
+)
+
+
+@contextmanager
+def working_directory(path):
+ """Context manager for changing the working directory."""
+ original_cwd = os.getcwd()
+ try:
+ os.chdir(path)
+ yield
+ finally:
+ os.chdir(original_cwd)
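+
+
+# Example: `with working_directory(script.parent): ...` restores the previous
+# cwd even if the body raises, so a failing example cannot leak a changed
+# working directory into later tests.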
+
@pytest.mark.parametrize(
'example_script',
sorted(
- EXAMPLES_DIR.rglob('*.py'), key=lambda path: (str(path.parent), path.name)
- ), # Sort by parent and script name
- ids=lambda path: str(path.relative_to(EXAMPLES_DIR)), # Show relative file paths
+ [p for p in EXAMPLES_DIR.rglob('*.py') if p.relative_to(EXAMPLES_DIR).as_posix() not in DEPENDENT_EXAMPLES],
+ key=lambda path: (str(path.parent), path.name),
+ ),
+ ids=lambda path: str(path.relative_to(EXAMPLES_DIR)).replace(os.sep, '/'),
)
@pytest.mark.examples
-def test_example_scripts(example_script):
+def test_independent_examples(example_script):
"""
- Test all example scripts in the examples directory.
+ Test independent example scripts.
Ensures they run without errors.
Changes the current working directory to the directory of the example script.
Runs them alphabetically.
- This imitates behaviour of running the script directly
+ This imitates behaviour of running the script directly.
"""
- script_dir = example_script.parent
- original_cwd = os.getcwd()
+ with working_directory(example_script.parent):
+ timeout = 600
+ try:
+ result = subprocess.run(
+ [sys.executable, example_script.name],
+ capture_output=True,
+ text=True,
+ timeout=timeout,
+ )
+ except subprocess.TimeoutExpired:
+ pytest.fail(f'Script {example_script} timed out after {timeout} seconds')
- try:
- # Change the working directory to the script's location
- os.chdir(script_dir)
-
- # Run the script
- result = subprocess.run(
- [sys.executable, example_script.name],
- capture_output=True,
- text=True,
+ assert result.returncode == 0, (
+ f'Script {example_script} failed:\nSTDERR:\n{result.stderr}\nSTDOUT:\n{result.stdout}'
)
- assert result.returncode == 0, f'Script {example_script} failed:\n{result.stderr}'
- finally:
- # Restore the original working directory
- os.chdir(original_cwd)
+
+@pytest.mark.examples
+def test_dependent_examples():
+ """Test examples that must run in order (complex_example.py generates data for complex_example_results.py)."""
+ for script_path in DEPENDENT_EXAMPLES:
+ script_full_path = EXAMPLES_DIR / script_path
+
+ with working_directory(script_full_path.parent):
+ timeout = 600
+ try:
+ result = subprocess.run(
+ [sys.executable, script_full_path.name],
+ capture_output=True,
+ text=True,
+ timeout=timeout,
+ )
+ except subprocess.TimeoutExpired:
+ pytest.fail(f'Script {script_path} timed out after {timeout} seconds')
+
+ assert result.returncode == 0, f'{script_path} failed:\nSTDERR:\n{result.stderr}\nSTDOUT:\n{result.stdout}'
if __name__ == '__main__':
- pytest.main(['-v', '--disable-warnings'])
+ pytest.main(['-v', '--disable-warnings', '-m', 'examples'])
diff --git a/tests/test_flow.py b/tests/test_flow.py
index f7c5d8a69..8a011939f 100644
--- a/tests/test_flow.py
+++ b/tests/test_flow.py
@@ -4,36 +4,44 @@
import flixopt as fx
-from .conftest import assert_conequal, assert_var_equal, create_linopy_model
+from .conftest import assert_conequal, assert_sets_equal, assert_var_equal, create_linopy_model
class TestFlowModel:
"""Test the FlowModel class."""
- def test_flow_minimal(self, basic_flow_system_linopy):
+ def test_flow_minimal(self, basic_flow_system_linopy_coords, coords_config):
"""Test that flow model constraints are correctly generated."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+
flow = fx.Flow('Wärme', bus='Fernwärme', size=100)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
assert_conequal(
model.constraints['Sink(Wärme)|total_flow_hours'],
- flow.model.variables['Sink(Wärme)|total_flow_hours']
- == (flow.model.variables['Sink(Wärme)|flow_rate'] * model.hours_per_step).sum(),
+ flow.submodel.variables['Sink(Wärme)|total_flow_hours']
+ == (flow.submodel.variables['Sink(Wärme)|flow_rate'] * model.hours_per_step).sum('time'),
+ )
+ assert_var_equal(flow.submodel.flow_rate, model.add_variables(lower=0, upper=100, coords=model.get_coords()))
+ assert_var_equal(
+ flow.submodel.total_flow_hours,
+ model.add_variables(lower=0, coords=model.get_coords(['period', 'scenario'])),
+ )
+
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate'},
+ msg='Incorrect variables',
)
- assert_var_equal(flow.model.flow_rate, model.add_variables(lower=0, upper=100, coords=(timesteps,)))
- assert_var_equal(flow.model.total_flow_hours, model.add_variables(lower=0))
+ assert_sets_equal(set(flow.submodel.constraints), {'Sink(Wärme)|total_flow_hours'}, msg='Incorrect constraints')
- assert set(flow.model.variables) == set(['Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate'])
- assert set(flow.model.constraints) == set(['Sink(Wärme)|total_flow_hours'])
+ def test_flow(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
- def test_flow(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
@@ -46,342 +54,400 @@ def test_flow(self, basic_flow_system_linopy):
load_factor_max=0.9,
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
# total_flow_hours
assert_conequal(
model.constraints['Sink(Wärme)|total_flow_hours'],
- flow.model.variables['Sink(Wärme)|total_flow_hours']
- == (flow.model.variables['Sink(Wärme)|flow_rate'] * model.hours_per_step).sum(),
+ flow.submodel.variables['Sink(Wärme)|total_flow_hours']
+ == (flow.submodel.variables['Sink(Wärme)|flow_rate'] * model.hours_per_step).sum('time'),
)
- assert_var_equal(flow.model.total_flow_hours, model.add_variables(lower=10, upper=1000))
+ assert_var_equal(
+ flow.submodel.total_flow_hours,
+ model.add_variables(lower=10, upper=1000, coords=model.get_coords(['period', 'scenario'])),
+ )
+
+ assert flow.relative_minimum.dims == tuple(model.get_coords())
+ assert flow.relative_maximum.dims == tuple(model.get_coords())
assert_var_equal(
- flow.model.flow_rate,
+ flow.submodel.flow_rate,
model.add_variables(
- lower=np.linspace(0, 0.5, timesteps.size) * 100,
- upper=np.linspace(0.5, 1, timesteps.size) * 100,
- coords=(timesteps,),
+ lower=flow.relative_minimum * 100,
+ upper=flow.relative_maximum * 100,
+ coords=model.get_coords(),
),
)
assert_conequal(
model.constraints['Sink(Wärme)|load_factor_min'],
- flow.model.variables['Sink(Wärme)|total_flow_hours'] >= model.hours_per_step.sum('time') * 0.1 * 100,
+ flow.submodel.variables['Sink(Wärme)|total_flow_hours'] >= model.hours_per_step.sum('time') * 0.1 * 100,
)
assert_conequal(
model.constraints['Sink(Wärme)|load_factor_max'],
- flow.model.variables['Sink(Wärme)|total_flow_hours'] <= model.hours_per_step.sum('time') * 0.9 * 100,
+ flow.submodel.variables['Sink(Wärme)|total_flow_hours'] <= model.hours_per_step.sum('time') * 0.9 * 100,
)
- assert set(flow.model.variables) == set(['Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate'])
- assert set(flow.model.constraints) == set(
- ['Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|load_factor_max', 'Sink(Wärme)|load_factor_min']
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate'},
+ msg='Incorrect variables',
+ )
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|load_factor_max', 'Sink(Wärme)|load_factor_min'},
+ msg='Incorrect constraints',
)
- def test_effects_per_flow_hour(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_effects_per_flow_hour(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
costs_per_flow_hour = xr.DataArray(np.linspace(1, 2, timesteps.size), coords=(timesteps,))
co2_per_flow_hour = xr.DataArray(np.linspace(4, 5, timesteps.size), coords=(timesteps,))
flow = fx.Flow(
- 'Wärme', bus='Fernwärme', effects_per_flow_hour={'Costs': costs_per_flow_hour, 'CO2': co2_per_flow_hour}
+ 'Wärme', bus='Fernwärme', effects_per_flow_hour={'costs': costs_per_flow_hour, 'CO2': co2_per_flow_hour}
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow), fx.Effect('CO2', 't', ''))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]), fx.Effect('CO2', 't', ''))
model = create_linopy_model(flow_system)
- costs, co2 = flow_system.effects['Costs'], flow_system.effects['CO2']
+ costs, co2 = flow_system.effects['costs'], flow_system.effects['CO2']
- assert set(flow.model.variables) == {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate'}
- assert set(flow.model.constraints) == {'Sink(Wärme)|total_flow_hours'}
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate'},
+ msg='Incorrect variables',
+ )
+ assert_sets_equal(set(flow.submodel.constraints), {'Sink(Wärme)|total_flow_hours'}, msg='Incorrect constraints')
- assert 'Sink(Wärme)->Costs(operation)' in set(costs.model.constraints)
- assert 'Sink(Wärme)->CO2(operation)' in set(co2.model.constraints)
+ assert 'Sink(Wärme)->costs(temporal)' in set(costs.submodel.constraints)
+ assert 'Sink(Wärme)->CO2(temporal)' in set(co2.submodel.constraints)
assert_conequal(
- model.constraints['Sink(Wärme)->Costs(operation)'],
- model.variables['Sink(Wärme)->Costs(operation)']
- == flow.model.variables['Sink(Wärme)|flow_rate'] * model.hours_per_step * costs_per_flow_hour,
+ model.constraints['Sink(Wärme)->costs(temporal)'],
+ model.variables['Sink(Wärme)->costs(temporal)']
+ == flow.submodel.variables['Sink(Wärme)|flow_rate'] * model.hours_per_step * costs_per_flow_hour,
)
assert_conequal(
- model.constraints['Sink(Wärme)->CO2(operation)'],
- model.variables['Sink(Wärme)->CO2(operation)']
- == flow.model.variables['Sink(Wärme)|flow_rate'] * model.hours_per_step * co2_per_flow_hour,
+ model.constraints['Sink(Wärme)->CO2(temporal)'],
+ model.variables['Sink(Wärme)->CO2(temporal)']
+ == flow.submodel.variables['Sink(Wärme)|flow_rate'] * model.hours_per_step * co2_per_flow_hour,
)
class TestFlowInvestModel:
"""Test the FlowModel class."""
- def test_flow_invest(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_flow_invest(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
- size=fx.InvestParameters(minimum_size=20, maximum_size=100, optional=False),
+ size=fx.InvestParameters(minimum_size=20, maximum_size=100, mandatory=True),
relative_minimum=np.linspace(0.1, 0.5, timesteps.size),
relative_maximum=np.linspace(0.5, 1, timesteps.size),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert set(flow.model.variables) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {
'Sink(Wärme)|total_flow_hours',
'Sink(Wärme)|flow_rate',
'Sink(Wärme)|size',
- ]
+ },
+ msg='Incorrect variables',
)
- assert set(flow.model.constraints) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {
'Sink(Wärme)|total_flow_hours',
- 'Sink(Wärme)|lb_Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|ub_Sink(Wärme)|flow_rate',
- ]
+ 'Sink(Wärme)|flow_rate|ub',
+ 'Sink(Wärme)|flow_rate|lb',
+ },
+ msg='Incorrect constraints',
)
# size
- assert_var_equal(model['Sink(Wärme)|size'], model.add_variables(lower=20, upper=100))
+ assert_var_equal(
+ model['Sink(Wärme)|size'],
+ model.add_variables(lower=20, upper=100, coords=model.get_coords(['period', 'scenario'])),
+ )
+
+ assert flow.relative_minimum.dims == tuple(model.get_coords())
+ assert flow.relative_maximum.dims == tuple(model.get_coords())
# flow_rate
assert_var_equal(
- flow.model.flow_rate,
+ flow.submodel.flow_rate,
model.add_variables(
- lower=np.linspace(0.1, 0.5, timesteps.size) * 20,
- upper=np.linspace(0.5, 1, timesteps.size) * 100,
- coords=(timesteps,),
+ lower=flow.relative_minimum * 20,
+ upper=flow.relative_maximum * 100,
+ coords=model.get_coords(),
),
)
assert_conequal(
- model.constraints['Sink(Wärme)|lb_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- >= flow.model.variables['Sink(Wärme)|size']
- * xr.DataArray(np.linspace(0.1, 0.5, timesteps.size), coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|lb'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ >= flow.submodel.variables['Sink(Wärme)|size'] * flow.relative_minimum,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ub_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- <= flow.model.variables['Sink(Wärme)|size']
- * xr.DataArray(np.linspace(0.5, 1, timesteps.size), coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|ub'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ <= flow.submodel.variables['Sink(Wärme)|size'] * flow.relative_maximum,
)
- def test_flow_invest_optional(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_flow_invest_optional(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
- size=fx.InvestParameters(minimum_size=20, maximum_size=100, optional=True),
+ size=fx.InvestParameters(minimum_size=20, maximum_size=100, mandatory=False),
relative_minimum=np.linspace(0.1, 0.5, timesteps.size),
relative_maximum=np.linspace(0.5, 1, timesteps.size),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert set(flow.model.variables) == set(
- ['Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|size', 'Sink(Wärme)|is_invested']
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|size', 'Sink(Wärme)|invested'},
+ msg='Incorrect variables',
)
- assert set(flow.model.constraints) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {
'Sink(Wärme)|total_flow_hours',
- 'Sink(Wärme)|is_invested_ub',
- 'Sink(Wärme)|is_invested_lb',
- 'Sink(Wärme)|lb_Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|ub_Sink(Wärme)|flow_rate',
- ]
+ 'Sink(Wärme)|size|lb',
+ 'Sink(Wärme)|size|ub',
+ 'Sink(Wärme)|flow_rate|lb',
+ 'Sink(Wärme)|flow_rate|ub',
+ },
+ msg='Incorrect constraints',
)
- assert_var_equal(model['Sink(Wärme)|size'], model.add_variables(lower=0, upper=100))
+ assert_var_equal(
+ model['Sink(Wärme)|size'],
+ model.add_variables(lower=0, upper=100, coords=model.get_coords(['period', 'scenario'])),
+ )
- assert_var_equal(model['Sink(Wärme)|is_invested'], model.add_variables(binary=True))
+ assert_var_equal(
+ model['Sink(Wärme)|invested'],
+ model.add_variables(binary=True, coords=model.get_coords(['period', 'scenario'])),
+ )
+
+ assert flow.relative_minimum.dims == tuple(model.get_coords())
+ assert flow.relative_maximum.dims == tuple(model.get_coords())
# flow_rate
assert_var_equal(
- flow.model.flow_rate,
+ flow.submodel.flow_rate,
model.add_variables(
lower=0, # Optional investment
- upper=np.linspace(0.5, 1, timesteps.size) * 100,
- coords=(timesteps,),
+ upper=flow.relative_maximum * 100,
+ coords=model.get_coords(),
),
)
assert_conequal(
- model.constraints['Sink(Wärme)|lb_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- >= flow.model.variables['Sink(Wärme)|size']
- * xr.DataArray(np.linspace(0.1, 0.5, timesteps.size), coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|lb'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ >= flow.submodel.variables['Sink(Wärme)|size'] * flow.relative_minimum,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ub_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- <= flow.model.variables['Sink(Wärme)|size']
- * xr.DataArray(np.linspace(0.5, 1, timesteps.size), coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|ub'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ <= flow.submodel.variables['Sink(Wärme)|size'] * flow.relative_maximum,
)
# Is invested
assert_conequal(
- model.constraints['Sink(Wärme)|is_invested_ub'],
- flow.model.variables['Sink(Wärme)|size'] <= flow.model.variables['Sink(Wärme)|is_invested'] * 100,
+ model.constraints['Sink(Wärme)|size|ub'],
+ flow.submodel.variables['Sink(Wärme)|size'] <= flow.submodel.variables['Sink(Wärme)|invested'] * 100,
)
assert_conequal(
- model.constraints['Sink(Wärme)|is_invested_lb'],
- flow.model.variables['Sink(Wärme)|size'] >= flow.model.variables['Sink(Wärme)|is_invested'] * 20,
+ model.constraints['Sink(Wärme)|size|lb'],
+ flow.submodel.variables['Sink(Wärme)|size'] >= flow.submodel.variables['Sink(Wärme)|invested'] * 20,
)
- def test_flow_invest_optional_wo_min_size(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_flow_invest_optional_wo_min_size(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
- size=fx.InvestParameters(maximum_size=100, optional=True),
+ size=fx.InvestParameters(maximum_size=100, mandatory=False),
relative_minimum=np.linspace(0.1, 0.5, timesteps.size),
relative_maximum=np.linspace(0.5, 1, timesteps.size),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert set(flow.model.variables) == set(
- ['Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|size', 'Sink(Wärme)|is_invested']
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|size', 'Sink(Wärme)|invested'},
+ msg='Incorrect variables',
)
- assert set(flow.model.constraints) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {
'Sink(Wärme)|total_flow_hours',
- 'Sink(Wärme)|is_invested_ub',
- 'Sink(Wärme)|is_invested_lb',
- 'Sink(Wärme)|lb_Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|ub_Sink(Wärme)|flow_rate',
- ]
+ 'Sink(Wärme)|size|ub',
+ 'Sink(Wärme)|size|lb',
+ 'Sink(Wärme)|flow_rate|lb',
+ 'Sink(Wärme)|flow_rate|ub',
+ },
+ msg='Incorrect constraints',
)
- assert_var_equal(model['Sink(Wärme)|size'], model.add_variables(lower=0, upper=100))
+ assert_var_equal(
+ model['Sink(Wärme)|size'],
+ model.add_variables(lower=0, upper=100, coords=model.get_coords(['period', 'scenario'])),
+ )
- assert_var_equal(model['Sink(Wärme)|is_invested'], model.add_variables(binary=True))
+ assert_var_equal(
+ model['Sink(Wärme)|invested'],
+ model.add_variables(binary=True, coords=model.get_coords(['period', 'scenario'])),
+ )
+
+ assert flow.relative_minimum.dims == tuple(model.get_coords())
+ assert flow.relative_maximum.dims == tuple(model.get_coords())
# flow_rate
assert_var_equal(
- flow.model.flow_rate,
+ flow.submodel.flow_rate,
model.add_variables(
lower=0, # Optional investment
- upper=np.linspace(0.5, 1, timesteps.size) * 100,
- coords=(timesteps,),
+ upper=flow.relative_maximum * 100,
+ coords=model.get_coords(),
),
)
assert_conequal(
- model.constraints['Sink(Wärme)|lb_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- >= flow.model.variables['Sink(Wärme)|size']
- * xr.DataArray(np.linspace(0.1, 0.5, timesteps.size), coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|lb'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ >= flow.submodel.variables['Sink(Wärme)|size'] * flow.relative_minimum,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ub_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- <= flow.model.variables['Sink(Wärme)|size']
- * xr.DataArray(np.linspace(0.5, 1, timesteps.size), coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|ub'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ <= flow.submodel.variables['Sink(Wärme)|size'] * flow.relative_maximum,
)
# Is invested
assert_conequal(
- model.constraints['Sink(Wärme)|is_invested_ub'],
- flow.model.variables['Sink(Wärme)|size'] <= flow.model.variables['Sink(Wärme)|is_invested'] * 100,
+ model.constraints['Sink(Wärme)|size|ub'],
+ flow.submodel.variables['Sink(Wärme)|size'] <= flow.submodel.variables['Sink(Wärme)|invested'] * 100,
)
assert_conequal(
- model.constraints['Sink(Wärme)|is_invested_lb'],
- flow.model.variables['Sink(Wärme)|size'] >= flow.model.variables['Sink(Wärme)|is_invested'] * 1e-5,
+ model.constraints['Sink(Wärme)|size|lb'],
+ flow.submodel.variables['Sink(Wärme)|size'] >= flow.submodel.variables['Sink(Wärme)|invested'] * 1e-5,
)
- def test_flow_invest_wo_min_size_non_optional(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_flow_invest_wo_min_size_non_optional(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
- size=fx.InvestParameters(maximum_size=100, optional=False),
+ size=fx.InvestParameters(maximum_size=100, mandatory=True),
relative_minimum=np.linspace(0.1, 0.5, timesteps.size),
relative_maximum=np.linspace(0.5, 1, timesteps.size),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert set(flow.model.variables) == set(
- ['Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|size']
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|size'},
+ msg='Incorrect variables',
)
- assert set(flow.model.constraints) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {
'Sink(Wärme)|total_flow_hours',
- 'Sink(Wärme)|lb_Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|ub_Sink(Wärme)|flow_rate',
- ]
+ 'Sink(Wärme)|flow_rate|lb',
+ 'Sink(Wärme)|flow_rate|ub',
+ },
+ msg='Incorrect constraints',
)
- assert_var_equal(model['Sink(Wärme)|size'], model.add_variables(lower=1e-5, upper=100))
+ assert_var_equal(
+ model['Sink(Wärme)|size'],
+ model.add_variables(lower=1e-5, upper=100, coords=model.get_coords(['period', 'scenario'])),
+ )
+
+ assert flow.relative_minimum.dims == tuple(model.get_coords())
+ assert flow.relative_maximum.dims == tuple(model.get_coords())
# flow_rate
assert_var_equal(
- flow.model.flow_rate,
+ flow.submodel.flow_rate,
model.add_variables(
- lower=np.linspace(0.1, 0.5, timesteps.size) * 1e-5,
- upper=np.linspace(0.5, 1, timesteps.size) * 100,
- coords=(timesteps,),
+ lower=flow.relative_minimum * 1e-5,
+ upper=flow.relative_maximum * 100,
+ coords=model.get_coords(),
),
)
assert_conequal(
- model.constraints['Sink(Wärme)|lb_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- >= flow.model.variables['Sink(Wärme)|size']
- * xr.DataArray(np.linspace(0.1, 0.5, timesteps.size), coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|lb'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ >= flow.submodel.variables['Sink(Wärme)|size'] * flow.relative_minimum,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ub_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- <= flow.model.variables['Sink(Wärme)|size']
- * xr.DataArray(np.linspace(0.5, 1, timesteps.size), coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|ub'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ <= flow.submodel.variables['Sink(Wärme)|size'] * flow.relative_maximum,
)
- def test_flow_invest_fixed_size(self, basic_flow_system_linopy):
+ def test_flow_invest_fixed_size(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with fixed size investment."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
- size=fx.InvestParameters(fixed_size=75, optional=False),
+ size=fx.InvestParameters(fixed_size=75, mandatory=True),
relative_minimum=0.2,
relative_maximum=0.9,
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert set(flow.model.variables) == {
- 'Sink(Wärme)|total_flow_hours',
- 'Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|size',
- }
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|size'},
+ msg='Incorrect variables',
+ )
# Check that size is fixed to 75
- assert_var_equal(flow.model.variables['Sink(Wärme)|size'], model.add_variables(lower=75, upper=75))
+ assert_var_equal(
+ flow.submodel.variables['Sink(Wärme)|size'],
+ model.add_variables(lower=75, upper=75, coords=model.get_coords(['period', 'scenario'])),
+ )
# Check flow rate bounds
- assert_var_equal(flow.model.flow_rate, model.add_variables(lower=0.2 * 75, upper=0.9 * 75, coords=(timesteps,)))
+ assert_var_equal(
+ flow.submodel.flow_rate, model.add_variables(lower=0.2 * 75, upper=0.9 * 75, coords=model.get_coords())
+ )
- def test_flow_invest_with_effects(self, basic_flow_system_linopy):
+ def test_flow_invest_with_effects(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with investment effects."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create effects
co2 = fx.Effect(label='CO2', unit='ton', description='CO2 emissions')
@@ -392,35 +458,36 @@ def test_flow_invest_with_effects(self, basic_flow_system_linopy):
size=fx.InvestParameters(
minimum_size=20,
maximum_size=100,
- optional=True,
- fix_effects={'Costs': 1000, 'CO2': 5}, # Fixed investment effects
- specific_effects={'Costs': 500, 'CO2': 0.1}, # Specific investment effects
+ mandatory=False,
+ effects_of_investment={'costs': 1000, 'CO2': 5}, # Fixed investment effects
+ effects_of_investment_per_size={'costs': 500, 'CO2': 0.1}, # Specific investment effects
),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow), co2)
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]), co2)
model = create_linopy_model(flow_system)
# Check investment effects
- assert 'Sink(Wärme)->Costs(invest)' in model.variables
- assert 'Sink(Wärme)->CO2(invest)' in model.variables
+ assert 'Sink(Wärme)->costs(periodic)' in model.variables
+ assert 'Sink(Wärme)->CO2(periodic)' in model.variables
- # Check fix effects (applied only when is_invested=1)
+ # Check investment effects: the fixed share applies only when invested=1, the specific share scales with size
assert_conequal(
- model.constraints['Sink(Wärme)->Costs(invest)'],
- model.variables['Sink(Wärme)->Costs(invest)']
- == flow.model.variables['Sink(Wärme)|is_invested'] * 1000 + flow.model.variables['Sink(Wärme)|size'] * 500,
+ model.constraints['Sink(Wärme)->costs(periodic)'],
+ model.variables['Sink(Wärme)->costs(periodic)']
+ == flow.submodel.variables['Sink(Wärme)|invested'] * 1000
+ + flow.submodel.variables['Sink(Wärme)|size'] * 500,
)
assert_conequal(
- model.constraints['Sink(Wärme)->CO2(invest)'],
- model.variables['Sink(Wärme)->CO2(invest)']
- == flow.model.variables['Sink(Wärme)|is_invested'] * 5 + flow.model.variables['Sink(Wärme)|size'] * 0.1,
+ model.constraints['Sink(Wärme)->CO2(periodic)'],
+ model.variables['Sink(Wärme)->CO2(periodic)']
+ == flow.submodel.variables['Sink(Wärme)|invested'] * 5 + flow.submodel.variables['Sink(Wärme)|size'] * 0.1,
)
- def test_flow_invest_divest_effects(self, basic_flow_system_linopy):
+ def test_flow_invest_divest_effects(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with divestment effects."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
@@ -428,136 +495,153 @@ def test_flow_invest_divest_effects(self, basic_flow_system_linopy):
size=fx.InvestParameters(
minimum_size=20,
maximum_size=100,
- optional=True,
- divest_effects={'Costs': 500}, # Cost incurred when NOT investing
+ mandatory=False,
+ effects_of_retirement={'costs': 500}, # Cost incurred when NOT investing
),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
# Check divestment effects
- assert 'Sink(Wärme)->Costs(invest)' in model.constraints
+ assert 'Sink(Wärme)->costs(periodic)' in model.constraints
assert_conequal(
- model.constraints['Sink(Wärme)->Costs(invest)'],
- model.variables['Sink(Wärme)->Costs(invest)'] + (model.variables['Sink(Wärme)|is_invested'] - 1) * 500 == 0,
+ model.constraints['Sink(Wärme)->costs(periodic)'],
+ model.variables['Sink(Wärme)->costs(periodic)'] + (model.variables['Sink(Wärme)|invested'] - 1) * 500 == 0,
)
class TestFlowOnModel:
"""Test the FlowModel class."""
- def test_flow_on(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_flow_on(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
size=100,
- relative_minimum=xr.DataArray(0.2, coords=(timesteps,)),
- relative_maximum=xr.DataArray(0.8, coords=(timesteps,)),
+ relative_minimum=0.2,
+ relative_maximum=0.8,
on_off_parameters=fx.OnOffParameters(),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert set(flow.model.variables) == set(
- ['Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|on', 'Sink(Wärme)|on_hours_total']
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {'Sink(Wärme)|total_flow_hours', 'Sink(Wärme)|flow_rate', 'Sink(Wärme)|on', 'Sink(Wärme)|on_hours_total'},
+ msg='Incorrect variables',
)
- assert set(flow.model.constraints) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {
'Sink(Wärme)|total_flow_hours',
'Sink(Wärme)|on_hours_total',
- 'Sink(Wärme)|on_con1',
- 'Sink(Wärme)|on_con2',
- ]
+ 'Sink(Wärme)|flow_rate|lb',
+ 'Sink(Wärme)|flow_rate|ub',
+ },
+ msg='Incorrect constraints',
)
# flow_rate
assert_var_equal(
- flow.model.flow_rate,
+ flow.submodel.flow_rate,
model.add_variables(
lower=0,
upper=0.8 * 100,
- coords=(timesteps,),
+ coords=model.get_coords(),
),
)
# OnOff
assert_var_equal(
- flow.model.on_off.on,
- model.add_variables(binary=True, coords=(timesteps,)),
+ flow.submodel.on_off.on,
+ model.add_variables(binary=True, coords=model.get_coords()),
)
assert_var_equal(
model.variables['Sink(Wärme)|on_hours_total'],
- model.add_variables(lower=0),
+ model.add_variables(lower=0, coords=model.get_coords(['period', 'scenario'])),
)
assert_conequal(
- model.constraints['Sink(Wärme)|on_con1'],
- flow.model.variables['Sink(Wärme)|on'] * 0.2 * 100 <= flow.model.variables['Sink(Wärme)|flow_rate'],
+ model.constraints['Sink(Wärme)|flow_rate|lb'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate'] >= flow.submodel.variables['Sink(Wärme)|on'] * 0.2 * 100,
)
assert_conequal(
- model.constraints['Sink(Wärme)|on_con2'],
- flow.model.variables['Sink(Wärme)|on'] * 0.8 * 100 >= flow.model.variables['Sink(Wärme)|flow_rate'],
+ model.constraints['Sink(Wärme)|flow_rate|ub'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate'] <= flow.submodel.variables['Sink(Wärme)|on'] * 0.8 * 100,
)
assert_conequal(
model.constraints['Sink(Wärme)|on_hours_total'],
- flow.model.variables['Sink(Wärme)|on_hours_total']
- == (flow.model.variables['Sink(Wärme)|on'] * model.hours_per_step).sum(),
+ flow.submodel.variables['Sink(Wärme)|on_hours_total']
+ == (flow.submodel.variables['Sink(Wärme)|on'] * model.hours_per_step).sum('time'),
)
- def test_effects_per_running_hour(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_effects_per_running_hour(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
- costs_per_running_hour = xr.DataArray(np.linspace(1, 2, timesteps.size), coords=(timesteps,))
- co2_per_running_hour = xr.DataArray(np.linspace(4, 5, timesteps.size), coords=(timesteps,))
+ costs_per_running_hour = np.linspace(1, 2, timesteps.size)
+ co2_per_running_hour = np.linspace(4, 5, timesteps.size)
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
on_off_parameters=fx.OnOffParameters(
- effects_per_running_hour={'Costs': costs_per_running_hour, 'CO2': co2_per_running_hour}
+ effects_per_running_hour={'costs': costs_per_running_hour, 'CO2': co2_per_running_hour}
),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow), fx.Effect('CO2', 't', ''))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]), fx.Effect('CO2', 't', ''))
model = create_linopy_model(flow_system)
- costs, co2 = flow_system.effects['Costs'], flow_system.effects['CO2']
-
- assert set(flow.model.variables) == {
- 'Sink(Wärme)|total_flow_hours',
- 'Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|on',
- 'Sink(Wärme)|on_hours_total',
- }
- assert set(flow.model.constraints) == {
- 'Sink(Wärme)|total_flow_hours',
- 'Sink(Wärme)|on_con1',
- 'Sink(Wärme)|on_con2',
- 'Sink(Wärme)|on_hours_total',
- }
-
- assert 'Sink(Wärme)->Costs(operation)' in set(costs.model.constraints)
- assert 'Sink(Wärme)->CO2(operation)' in set(co2.model.constraints)
+ costs, co2 = flow_system.effects['costs'], flow_system.effects['CO2']
+
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {
+ 'Sink(Wärme)|total_flow_hours',
+ 'Sink(Wärme)|flow_rate',
+ 'Sink(Wärme)|on',
+ 'Sink(Wärme)|on_hours_total',
+ },
+ msg='Incorrect variables',
+ )
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {
+ 'Sink(Wärme)|total_flow_hours',
+ 'Sink(Wärme)|flow_rate|lb',
+ 'Sink(Wärme)|flow_rate|ub',
+ 'Sink(Wärme)|on_hours_total',
+ },
+ msg='Incorrect constraints',
+ )
+
+ assert 'Sink(Wärme)->costs(temporal)' in set(costs.submodel.constraints)
+ assert 'Sink(Wärme)->CO2(temporal)' in set(co2.submodel.constraints)
+
+ costs_per_running_hour = flow.on_off_parameters.effects_per_running_hour['costs']
+ co2_per_running_hour = flow.on_off_parameters.effects_per_running_hour['CO2']
+
+ assert costs_per_running_hour.dims == tuple(model.get_coords())
+ assert co2_per_running_hour.dims == tuple(model.get_coords())
assert_conequal(
- model.constraints['Sink(Wärme)->Costs(operation)'],
- model.variables['Sink(Wärme)->Costs(operation)']
- == flow.model.variables['Sink(Wärme)|on'] * model.hours_per_step * costs_per_running_hour,
+ model.constraints['Sink(Wärme)->costs(temporal)'],
+ model.variables['Sink(Wärme)->costs(temporal)']
+ == flow.submodel.variables['Sink(Wärme)|on'] * model.hours_per_step * costs_per_running_hour,
)
assert_conequal(
- model.constraints['Sink(Wärme)->CO2(operation)'],
- model.variables['Sink(Wärme)->CO2(operation)']
- == flow.model.variables['Sink(Wärme)|on'] * model.hours_per_step * co2_per_running_hour,
+ model.constraints['Sink(Wärme)->CO2(temporal)'],
+ model.variables['Sink(Wärme)->CO2(temporal)']
+ == flow.submodel.variables['Sink(Wärme)|on'] * model.hours_per_step * co2_per_running_hour,
)
- def test_consecutive_on_hours(self, basic_flow_system_linopy):
+ def test_consecutive_on_hours(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with minimum and maximum consecutive on hours."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
@@ -569,56 +653,67 @@ def test_consecutive_on_hours(self, basic_flow_system_linopy):
),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert {'Sink(Wärme)|ConsecutiveOn|hours', 'Sink(Wärme)|on'}.issubset(set(flow.model.variables))
-
- assert {
- 'Sink(Wärme)|ConsecutiveOn|con1',
- 'Sink(Wärme)|ConsecutiveOn|con2a',
- 'Sink(Wärme)|ConsecutiveOn|con2b',
- 'Sink(Wärme)|ConsecutiveOn|initial',
- 'Sink(Wärme)|ConsecutiveOn|minimum',
- }.issubset(set(flow.model.constraints))
+ assert {'Sink(Wärme)|consecutive_on_hours', 'Sink(Wärme)|on'}.issubset(set(flow.submodel.variables))
+
+ assert_sets_equal(
+ {
+ 'Sink(Wärme)|consecutive_on_hours|ub',
+ 'Sink(Wärme)|consecutive_on_hours|forward',
+ 'Sink(Wärme)|consecutive_on_hours|backward',
+ 'Sink(Wärme)|consecutive_on_hours|initial',
+ 'Sink(Wärme)|consecutive_on_hours|lb',
+ }
+ & set(flow.submodel.constraints),
+ {
+ 'Sink(Wärme)|consecutive_on_hours|ub',
+ 'Sink(Wärme)|consecutive_on_hours|forward',
+ 'Sink(Wärme)|consecutive_on_hours|backward',
+ 'Sink(Wärme)|consecutive_on_hours|initial',
+ 'Sink(Wärme)|consecutive_on_hours|lb',
+ },
+ msg='Missing consecutive on hours constraints',
+ )
assert_var_equal(
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'],
- model.add_variables(lower=0, upper=8, coords=(timesteps,)),
+ model.variables['Sink(Wärme)|consecutive_on_hours'],
+ model.add_variables(lower=0, upper=8, coords=model.get_coords()),
)
mega = model.hours_per_step.sum('time')
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|con1'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'] <= model.variables['Sink(Wärme)|on'] * mega,
+ model.constraints['Sink(Wärme)|consecutive_on_hours|ub'],
+ model.variables['Sink(Wärme)|consecutive_on_hours'] <= model.variables['Sink(Wärme)|on'] * mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|con2a'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=slice(1, None))
- <= model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=slice(None, -1))
+ model.constraints['Sink(Wärme)|consecutive_on_hours|forward'],
+ model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=slice(1, None))
+ <= model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=slice(None, -1))
+ model.hours_per_step.isel(time=slice(None, -1)),
)
# eq: duration(t) >= duration(t - 1) + dt(t) + (On(t) - 1) * BIG
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|con2b'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=slice(1, None))
- >= model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=slice(None, -1))
+ model.constraints['Sink(Wärme)|consecutive_on_hours|backward'],
+ model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=slice(1, None))
+ >= model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=slice(None, -1))
+ model.hours_per_step.isel(time=slice(None, -1))
+ (model.variables['Sink(Wärme)|on'].isel(time=slice(1, None)) - 1) * mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|initial'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=0)
+ model.constraints['Sink(Wärme)|consecutive_on_hours|initial'],
+ model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=0)
== model.variables['Sink(Wärme)|on'].isel(time=0) * model.hours_per_step.isel(time=0),
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|minimum'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours']
+ model.constraints['Sink(Wärme)|consecutive_on_hours|lb'],
+ model.variables['Sink(Wärme)|consecutive_on_hours']
>= (
model.variables['Sink(Wärme)|on'].isel(time=slice(None, -1))
- model.variables['Sink(Wärme)|on'].isel(time=slice(1, None))
@@ -626,10 +721,9 @@ def test_consecutive_on_hours(self, basic_flow_system_linopy):
* 2,
)
- def test_consecutive_on_hours_previous(self, basic_flow_system_linopy):
+ def test_consecutive_on_hours_previous(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with minimum and maximum consecutive on hours."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
@@ -642,56 +736,65 @@ def test_consecutive_on_hours_previous(self, basic_flow_system_linopy):
previous_flow_rate=np.array([10, 20, 30, 0, 20, 20, 30]), # Previously on for 3 steps
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert {'Sink(Wärme)|ConsecutiveOn|hours', 'Sink(Wärme)|on'}.issubset(set(flow.model.variables))
-
- assert {
- 'Sink(Wärme)|ConsecutiveOn|con1',
- 'Sink(Wärme)|ConsecutiveOn|con2a',
- 'Sink(Wärme)|ConsecutiveOn|con2b',
- 'Sink(Wärme)|ConsecutiveOn|initial',
- 'Sink(Wärme)|ConsecutiveOn|minimum',
- }.issubset(set(flow.model.constraints))
+ assert {'Sink(Wärme)|consecutive_on_hours', 'Sink(Wärme)|on'}.issubset(set(flow.submodel.variables))
+
+ assert_sets_equal(
+ {
+ 'Sink(Wärme)|consecutive_on_hours|lb',
+ 'Sink(Wärme)|consecutive_on_hours|forward',
+ 'Sink(Wärme)|consecutive_on_hours|backward',
+ 'Sink(Wärme)|consecutive_on_hours|initial',
+ }
+ & set(flow.submodel.constraints),
+ {
+ 'Sink(Wärme)|consecutive_on_hours|lb',
+ 'Sink(Wärme)|consecutive_on_hours|forward',
+ 'Sink(Wärme)|consecutive_on_hours|backward',
+ 'Sink(Wärme)|consecutive_on_hours|initial',
+ },
+ msg='Missing consecutive on hours constraints for previous states',
+ )
assert_var_equal(
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'],
- model.add_variables(lower=0, upper=8, coords=(timesteps,)),
+ model.variables['Sink(Wärme)|consecutive_on_hours'],
+ model.add_variables(lower=0, upper=8, coords=model.get_coords()),
)
mega = model.hours_per_step.sum('time') + model.hours_per_step.isel(time=0) * 3
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|con1'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'] <= model.variables['Sink(Wärme)|on'] * mega,
+ model.constraints['Sink(Wärme)|consecutive_on_hours|ub'],
+ model.variables['Sink(Wärme)|consecutive_on_hours'] <= model.variables['Sink(Wärme)|on'] * mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|con2a'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=slice(1, None))
- <= model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=slice(None, -1))
+ model.constraints['Sink(Wärme)|consecutive_on_hours|forward'],
+ model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=slice(1, None))
+ <= model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=slice(None, -1))
+ model.hours_per_step.isel(time=slice(None, -1)),
)
# eq: duration(t) >= duration(t - 1) + dt(t) + (On(t) - 1) * BIG
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|con2b'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=slice(1, None))
- >= model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=slice(None, -1))
+ model.constraints['Sink(Wärme)|consecutive_on_hours|backward'],
+ model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=slice(1, None))
+ >= model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=slice(None, -1))
+ model.hours_per_step.isel(time=slice(None, -1))
+ (model.variables['Sink(Wärme)|on'].isel(time=slice(1, None)) - 1) * mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|initial'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours'].isel(time=0)
+ model.constraints['Sink(Wärme)|consecutive_on_hours|initial'],
+ model.variables['Sink(Wärme)|consecutive_on_hours'].isel(time=0)
== model.variables['Sink(Wärme)|on'].isel(time=0) * (model.hours_per_step.isel(time=0) * (1 + 3)),
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOn|minimum'],
- model.variables['Sink(Wärme)|ConsecutiveOn|hours']
+ model.constraints['Sink(Wärme)|consecutive_on_hours|lb'],
+ model.variables['Sink(Wärme)|consecutive_on_hours']
>= (
model.variables['Sink(Wärme)|on'].isel(time=slice(None, -1))
- model.variables['Sink(Wärme)|on'].isel(time=slice(1, None))
@@ -699,10 +802,9 @@ def test_consecutive_on_hours_previous(self, basic_flow_system_linopy):
* 2,
)
- def test_consecutive_off_hours(self, basic_flow_system_linopy):
+ def test_consecutive_off_hours(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with minimum and maximum consecutive off hours."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
@@ -714,56 +816,67 @@ def test_consecutive_off_hours(self, basic_flow_system_linopy):
),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert {'Sink(Wärme)|ConsecutiveOff|hours', 'Sink(Wärme)|off'}.issubset(set(flow.model.variables))
-
- assert {
- 'Sink(Wärme)|ConsecutiveOff|con1',
- 'Sink(Wärme)|ConsecutiveOff|con2a',
- 'Sink(Wärme)|ConsecutiveOff|con2b',
- 'Sink(Wärme)|ConsecutiveOff|initial',
- 'Sink(Wärme)|ConsecutiveOff|minimum',
- }.issubset(set(flow.model.constraints))
+ assert {'Sink(Wärme)|consecutive_off_hours', 'Sink(Wärme)|off'}.issubset(set(flow.submodel.variables))
+
+ assert_sets_equal(
+ {
+ 'Sink(Wärme)|consecutive_off_hours|ub',
+ 'Sink(Wärme)|consecutive_off_hours|forward',
+ 'Sink(Wärme)|consecutive_off_hours|backward',
+ 'Sink(Wärme)|consecutive_off_hours|initial',
+ 'Sink(Wärme)|consecutive_off_hours|lb',
+ }
+ & set(flow.submodel.constraints),
+ {
+ 'Sink(Wärme)|consecutive_off_hours|ub',
+ 'Sink(Wärme)|consecutive_off_hours|forward',
+ 'Sink(Wärme)|consecutive_off_hours|backward',
+ 'Sink(Wärme)|consecutive_off_hours|initial',
+ 'Sink(Wärme)|consecutive_off_hours|lb',
+ },
+ msg='Missing consecutive off hours constraints',
+ )
assert_var_equal(
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'],
- model.add_variables(lower=0, upper=12, coords=(timesteps,)),
+ model.variables['Sink(Wärme)|consecutive_off_hours'],
+ model.add_variables(lower=0, upper=12, coords=model.get_coords()),
)
mega = model.hours_per_step.sum('time') + model.hours_per_step.isel(time=0) * 1 # previously off for 1h
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|con1'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'] <= model.variables['Sink(Wärme)|off'] * mega,
+ model.constraints['Sink(Wärme)|consecutive_off_hours|ub'],
+ model.variables['Sink(Wärme)|consecutive_off_hours'] <= model.variables['Sink(Wärme)|off'] * mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|con2a'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=slice(1, None))
- <= model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=slice(None, -1))
+ model.constraints['Sink(Wärme)|consecutive_off_hours|forward'],
+ model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=slice(1, None))
+ <= model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=slice(None, -1))
+ model.hours_per_step.isel(time=slice(None, -1)),
)
# eq: duration(t) >= duration(t - 1) + dt(t) + (On(t) - 1) * BIG
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|con2b'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=slice(1, None))
- >= model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=slice(None, -1))
+ model.constraints['Sink(Wärme)|consecutive_off_hours|backward'],
+ model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=slice(1, None))
+ >= model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=slice(None, -1))
+ model.hours_per_step.isel(time=slice(None, -1))
+ (model.variables['Sink(Wärme)|off'].isel(time=slice(1, None)) - 1) * mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|initial'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=0)
+ model.constraints['Sink(Wärme)|consecutive_off_hours|initial'],
+ model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=0)
== model.variables['Sink(Wärme)|off'].isel(time=0) * (model.hours_per_step.isel(time=0) * (1 + 1)),
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|minimum'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours']
+ model.constraints['Sink(Wärme)|consecutive_off_hours|lb'],
+ model.variables['Sink(Wärme)|consecutive_off_hours']
>= (
model.variables['Sink(Wärme)|off'].isel(time=slice(None, -1))
- model.variables['Sink(Wärme)|off'].isel(time=slice(1, None))
@@ -771,10 +884,9 @@ def test_consecutive_off_hours(self, basic_flow_system_linopy):
* 4,
)
- def test_consecutive_off_hours_previous(self, basic_flow_system_linopy):
+ def test_consecutive_off_hours_previous(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with minimum and maximum consecutive off hours."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
@@ -787,56 +899,67 @@ def test_consecutive_off_hours_previous(self, basic_flow_system_linopy):
previous_flow_rate=np.array([10, 20, 30, 0, 20, 0, 0]), # Previously off for 2 steps
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert {'Sink(Wärme)|ConsecutiveOff|hours', 'Sink(Wärme)|off'}.issubset(set(flow.model.variables))
-
- assert {
- 'Sink(Wärme)|ConsecutiveOff|con1',
- 'Sink(Wärme)|ConsecutiveOff|con2a',
- 'Sink(Wärme)|ConsecutiveOff|con2b',
- 'Sink(Wärme)|ConsecutiveOff|initial',
- 'Sink(Wärme)|ConsecutiveOff|minimum',
- }.issubset(set(flow.model.constraints))
+ assert {'Sink(Wärme)|consecutive_off_hours', 'Sink(Wärme)|off'}.issubset(set(flow.submodel.variables))
+
+ assert_sets_equal(
+ {
+ 'Sink(Wärme)|consecutive_off_hours|ub',
+ 'Sink(Wärme)|consecutive_off_hours|forward',
+ 'Sink(Wärme)|consecutive_off_hours|backward',
+ 'Sink(Wärme)|consecutive_off_hours|initial',
+ 'Sink(Wärme)|consecutive_off_hours|lb',
+ }
+ & set(flow.submodel.constraints),
+ {
+ 'Sink(Wärme)|consecutive_off_hours|ub',
+ 'Sink(Wärme)|consecutive_off_hours|forward',
+ 'Sink(Wärme)|consecutive_off_hours|backward',
+ 'Sink(Wärme)|consecutive_off_hours|initial',
+ 'Sink(Wärme)|consecutive_off_hours|lb',
+ },
+ msg='Missing consecutive off hours constraints for previous states',
+ )
assert_var_equal(
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'],
- model.add_variables(lower=0, upper=12, coords=(timesteps,)),
+ model.variables['Sink(Wärme)|consecutive_off_hours'],
+ model.add_variables(lower=0, upper=12, coords=model.get_coords()),
)
mega = model.hours_per_step.sum('time') + model.hours_per_step.isel(time=0) * 2
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|con1'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'] <= model.variables['Sink(Wärme)|off'] * mega,
+ model.constraints['Sink(Wärme)|consecutive_off_hours|ub'],
+ model.variables['Sink(Wärme)|consecutive_off_hours'] <= model.variables['Sink(Wärme)|off'] * mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|con2a'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=slice(1, None))
- <= model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=slice(None, -1))
+ model.constraints['Sink(Wärme)|consecutive_off_hours|forward'],
+ model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=slice(1, None))
+ <= model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=slice(None, -1))
+ model.hours_per_step.isel(time=slice(None, -1)),
)
# eq: duration(t) >= duration(t - 1) + dt(t - 1) + (Off(t) - 1) * BIG
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|con2b'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=slice(1, None))
- >= model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=slice(None, -1))
+ model.constraints['Sink(Wärme)|consecutive_off_hours|backward'],
+ model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=slice(1, None))
+ >= model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=slice(None, -1))
+ model.hours_per_step.isel(time=slice(None, -1))
+ (model.variables['Sink(Wärme)|off'].isel(time=slice(1, None)) - 1) * mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|initial'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours'].isel(time=0)
+ model.constraints['Sink(Wärme)|consecutive_off_hours|initial'],
+ model.variables['Sink(Wärme)|consecutive_off_hours'].isel(time=0)
== model.variables['Sink(Wärme)|off'].isel(time=0) * (model.hours_per_step.isel(time=0) * (1 + 2)),
)
assert_conequal(
- model.constraints['Sink(Wärme)|ConsecutiveOff|minimum'],
- model.variables['Sink(Wärme)|ConsecutiveOff|hours']
+ model.constraints['Sink(Wärme)|consecutive_off_hours|lb'],
+ model.variables['Sink(Wärme)|consecutive_off_hours']
>= (
model.variables['Sink(Wärme)|off'].isel(time=slice(None, -1))
- model.variables['Sink(Wärme)|off'].isel(time=slice(1, None))
@@ -844,9 +967,9 @@ def test_consecutive_off_hours_previous(self, basic_flow_system_linopy):
* 4,
)
- def test_switch_on_constraints(self, basic_flow_system_linopy):
+ def test_switch_on_constraints(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with constraints on the number of startups."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
@@ -854,48 +977,61 @@ def test_switch_on_constraints(self, basic_flow_system_linopy):
size=100,
on_off_parameters=fx.OnOffParameters(
switch_on_total_max=5, # Maximum 5 startups
- effects_per_switch_on={'Costs': 100}, # 100 EUR startup cost
+ effects_per_switch_on={'costs': 100}, # 100 EUR startup cost
),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
# Check that variables exist
- assert {'Sink(Wärme)|switch_on', 'Sink(Wärme)|switch_off', 'Sink(Wärme)|switch_on_nr'}.issubset(
- set(flow.model.variables)
+ assert {'Sink(Wärme)|switch|on', 'Sink(Wärme)|switch|off', 'Sink(Wärme)|switch|count'}.issubset(
+ set(flow.submodel.variables)
)
# Check that constraints exist
- assert {
- 'Sink(Wärme)|switch_con',
- 'Sink(Wärme)|initial_switch_con',
- 'Sink(Wärme)|switch_on_or_off',
- 'Sink(Wärme)|switch_on_nr',
- }.issubset(set(flow.model.constraints))
+ assert_sets_equal(
+ {
+ 'Sink(Wärme)|switch|transition',
+ 'Sink(Wärme)|switch|initial',
+ 'Sink(Wärme)|switch|mutex',
+ 'Sink(Wärme)|switch|count',
+ }
+ & set(flow.submodel.constraints),
+ {
+ 'Sink(Wärme)|switch|transition',
+ 'Sink(Wärme)|switch|initial',
+ 'Sink(Wärme)|switch|mutex',
+ 'Sink(Wärme)|switch|count',
+ },
+ msg='Missing switch constraints',
+ )
# Check switch count variable bounds
- assert_var_equal(flow.model.variables['Sink(Wärme)|switch_on_nr'], model.add_variables(lower=0, upper=5))
+ assert_var_equal(
+ flow.submodel.variables['Sink(Wärme)|switch|count'],
+ model.add_variables(lower=0, upper=5, coords=model.get_coords(['period', 'scenario'])),
+ )
# Verify switch count constraint (limits number of startups)
assert_conequal(
- model.constraints['Sink(Wärme)|switch_on_nr'],
- flow.model.variables['Sink(Wärme)|switch_on_nr']
- == flow.model.variables['Sink(Wärme)|switch_on'].sum('time'),
+ model.constraints['Sink(Wärme)|switch|count'],
+ flow.submodel.variables['Sink(Wärme)|switch|count']
+ == flow.submodel.variables['Sink(Wärme)|switch|on'].sum('time'),
)
# Check that startup cost effect constraint exists
- assert 'Sink(Wärme)->Costs(operation)' in model.constraints
+ assert 'Sink(Wärme)->costs(temporal)' in model.constraints
# Verify the startup cost effect constraint
assert_conequal(
- model.constraints['Sink(Wärme)->Costs(operation)'],
- model.variables['Sink(Wärme)->Costs(operation)'] == flow.model.variables['Sink(Wärme)|switch_on'] * 100,
+ model.constraints['Sink(Wärme)->costs(temporal)'],
+ model.variables['Sink(Wärme)->costs(temporal)'] == flow.submodel.variables['Sink(Wärme)|switch|on'] * 100,
)
- def test_on_hours_limits(self, basic_flow_system_linopy):
+ def test_on_hours_limits(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with limits on total on hours."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
@@ -907,224 +1043,258 @@ def test_on_hours_limits(self, basic_flow_system_linopy):
),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
# Check that variables exist
- assert {'Sink(Wärme)|on', 'Sink(Wärme)|on_hours_total'}.issubset(set(flow.model.variables))
+ assert {'Sink(Wärme)|on', 'Sink(Wärme)|on_hours_total'}.issubset(set(flow.submodel.variables))
# Check that constraints exist
assert 'Sink(Wärme)|on_hours_total' in model.constraints
# Check on_hours_total variable bounds
- assert_var_equal(flow.model.variables['Sink(Wärme)|on_hours_total'], model.add_variables(lower=20, upper=100))
+ assert_var_equal(
+ flow.submodel.variables['Sink(Wärme)|on_hours_total'],
+ model.add_variables(lower=20, upper=100, coords=model.get_coords(['period', 'scenario'])),
+ )
# Check on_hours_total constraint
assert_conequal(
model.constraints['Sink(Wärme)|on_hours_total'],
- flow.model.variables['Sink(Wärme)|on_hours_total']
- == (flow.model.variables['Sink(Wärme)|on'] * model.hours_per_step).sum(),
+ flow.submodel.variables['Sink(Wärme)|on_hours_total']
+ == (flow.submodel.variables['Sink(Wärme)|on'] * model.hours_per_step).sum('time'),
)
class TestFlowOnInvestModel:
"""Test the FlowModel class."""
- def test_flow_on_invest_optional(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_flow_on_invest_optional(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
- size=fx.InvestParameters(minimum_size=20, maximum_size=200, optional=True),
- relative_minimum=xr.DataArray(0.2, coords=(timesteps,)),
- relative_maximum=xr.DataArray(0.8, coords=(timesteps,)),
+ size=fx.InvestParameters(minimum_size=20, maximum_size=200, mandatory=False),
+ relative_minimum=0.2,
+ relative_maximum=0.8,
on_off_parameters=fx.OnOffParameters(),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert set(flow.model.variables) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {
'Sink(Wärme)|total_flow_hours',
'Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|is_invested',
+ 'Sink(Wärme)|invested',
'Sink(Wärme)|size',
'Sink(Wärme)|on',
'Sink(Wärme)|on_hours_total',
- ]
+ },
+ msg='Incorrect variables',
)
- assert set(flow.model.constraints) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {
'Sink(Wärme)|total_flow_hours',
'Sink(Wärme)|on_hours_total',
- 'Sink(Wärme)|on_con1',
- 'Sink(Wärme)|on_con2',
- 'Sink(Wärme)|is_invested_lb',
- 'Sink(Wärme)|is_invested_ub',
- 'Sink(Wärme)|lb_Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|ub_Sink(Wärme)|flow_rate',
- ]
+ 'Sink(Wärme)|flow_rate|lb1',
+ 'Sink(Wärme)|flow_rate|ub1',
+ 'Sink(Wärme)|size|lb',
+ 'Sink(Wärme)|size|ub',
+ 'Sink(Wärme)|flow_rate|lb2',
+ 'Sink(Wärme)|flow_rate|ub2',
+ },
+ msg='Incorrect constraints',
)
# flow_rate
assert_var_equal(
- flow.model.flow_rate,
+ flow.submodel.flow_rate,
model.add_variables(
lower=0,
upper=0.8 * 200,
- coords=(timesteps,),
+ coords=model.get_coords(),
),
)
# OnOff
assert_var_equal(
- flow.model.on_off.on,
- model.add_variables(binary=True, coords=(timesteps,)),
+ flow.submodel.on_off.on,
+ model.add_variables(binary=True, coords=model.get_coords()),
)
assert_var_equal(
model.variables['Sink(Wärme)|on_hours_total'],
- model.add_variables(lower=0),
+ model.add_variables(lower=0, coords=model.get_coords(['period', 'scenario'])),
)
assert_conequal(
- model.constraints['Sink(Wärme)|on_con1'],
- flow.model.variables['Sink(Wärme)|on'] * 0.2 * 20 <= flow.model.variables['Sink(Wärme)|flow_rate'],
+ model.constraints['Sink(Wärme)|size|lb'],
+ flow.submodel.variables['Sink(Wärme)|size'] >= flow.submodel.variables['Sink(Wärme)|invested'] * 20,
)
assert_conequal(
- model.constraints['Sink(Wärme)|on_con2'],
- flow.model.variables['Sink(Wärme)|on'] * 0.8 * 200 >= flow.model.variables['Sink(Wärme)|flow_rate'],
+ model.constraints['Sink(Wärme)|size|ub'],
+ flow.submodel.variables['Sink(Wärme)|size'] <= flow.submodel.variables['Sink(Wärme)|invested'] * 200,
+ )
+ assert_conequal(
+ model.constraints['Sink(Wärme)|flow_rate|lb1'],
+ flow.submodel.variables['Sink(Wärme)|on'] * 0.2 * 20 <= flow.submodel.variables['Sink(Wärme)|flow_rate'],
+ )
+ assert_conequal(
+ model.constraints['Sink(Wärme)|flow_rate|ub1'],
+ flow.submodel.variables['Sink(Wärme)|on'] * 0.8 * 200 >= flow.submodel.variables['Sink(Wärme)|flow_rate'],
)
assert_conequal(
model.constraints['Sink(Wärme)|on_hours_total'],
- flow.model.variables['Sink(Wärme)|on_hours_total']
- == (flow.model.variables['Sink(Wärme)|on'] * model.hours_per_step).sum(),
+ flow.submodel.variables['Sink(Wärme)|on_hours_total']
+ == (flow.submodel.variables['Sink(Wärme)|on'] * model.hours_per_step).sum('time'),
)
# Investment
- assert_var_equal(model['Sink(Wärme)|size'], model.add_variables(lower=0, upper=200))
+ assert_var_equal(
+ model['Sink(Wärme)|size'],
+ model.add_variables(lower=0, upper=200, coords=model.get_coords(['period', 'scenario'])),
+ )
mega = 0.2 * 200 # Relative minimum * maximum size
assert_conequal(
- model.constraints['Sink(Wärme)|lb_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- >= flow.model.variables['Sink(Wärme)|on'] * mega + flow.model.variables['Sink(Wärme)|size'] * 0.2 - mega,
+ model.constraints['Sink(Wärme)|flow_rate|lb2'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ >= flow.submodel.variables['Sink(Wärme)|on'] * mega
+ + flow.submodel.variables['Sink(Wärme)|size'] * 0.2
+ - mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ub_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate'] <= flow.model.variables['Sink(Wärme)|size'] * 0.8,
+ model.constraints['Sink(Wärme)|flow_rate|ub2'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate'] <= flow.submodel.variables['Sink(Wärme)|size'] * 0.8,
)
- def test_flow_on_invest_non_optional(self, basic_flow_system_linopy):
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ def test_flow_on_invest_non_optional(self, basic_flow_system_linopy_coords, coords_config):
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
- size=fx.InvestParameters(minimum_size=20, maximum_size=200, optional=False),
- relative_minimum=xr.DataArray(0.2, coords=(timesteps,)),
- relative_maximum=xr.DataArray(0.8, coords=(timesteps,)),
+ size=fx.InvestParameters(minimum_size=20, maximum_size=200, mandatory=True),
+ relative_minimum=0.2,
+ relative_maximum=0.8,
on_off_parameters=fx.OnOffParameters(),
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
- assert set(flow.model.variables) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.variables),
+ {
'Sink(Wärme)|total_flow_hours',
'Sink(Wärme)|flow_rate',
'Sink(Wärme)|size',
'Sink(Wärme)|on',
'Sink(Wärme)|on_hours_total',
- ]
+ },
+ msg='Incorrect variables',
)
- assert set(flow.model.constraints) == set(
- [
+ assert_sets_equal(
+ set(flow.submodel.constraints),
+ {
'Sink(Wärme)|total_flow_hours',
'Sink(Wärme)|on_hours_total',
- 'Sink(Wärme)|on_con1',
- 'Sink(Wärme)|on_con2',
- 'Sink(Wärme)|lb_Sink(Wärme)|flow_rate',
- 'Sink(Wärme)|ub_Sink(Wärme)|flow_rate',
- ]
+ 'Sink(Wärme)|flow_rate|lb1',
+ 'Sink(Wärme)|flow_rate|ub1',
+ 'Sink(Wärme)|flow_rate|lb2',
+ 'Sink(Wärme)|flow_rate|ub2',
+ },
+ msg='Incorrect constraints',
)
# flow_rate
assert_var_equal(
- flow.model.flow_rate,
+ flow.submodel.flow_rate,
model.add_variables(
lower=0,
upper=0.8 * 200,
- coords=(timesteps,),
+ coords=model.get_coords(),
),
)
# OnOff
assert_var_equal(
- flow.model.on_off.on,
- model.add_variables(binary=True, coords=(timesteps,)),
+ flow.submodel.on_off.on,
+ model.add_variables(binary=True, coords=model.get_coords()),
)
assert_var_equal(
model.variables['Sink(Wärme)|on_hours_total'],
- model.add_variables(lower=0),
+ model.add_variables(lower=0, coords=model.get_coords(['period', 'scenario'])),
)
assert_conequal(
- model.constraints['Sink(Wärme)|on_con1'],
- flow.model.variables['Sink(Wärme)|on'] * 0.2 * 20 <= flow.model.variables['Sink(Wärme)|flow_rate'],
+ model.constraints['Sink(Wärme)|flow_rate|lb1'],
+ flow.submodel.variables['Sink(Wärme)|on'] * 0.2 * 20 <= flow.submodel.variables['Sink(Wärme)|flow_rate'],
)
assert_conequal(
- model.constraints['Sink(Wärme)|on_con2'],
- flow.model.variables['Sink(Wärme)|on'] * 0.8 * 200 >= flow.model.variables['Sink(Wärme)|flow_rate'],
+ model.constraints['Sink(Wärme)|flow_rate|ub1'],
+ flow.submodel.variables['Sink(Wärme)|on'] * 0.8 * 200 >= flow.submodel.variables['Sink(Wärme)|flow_rate'],
)
assert_conequal(
model.constraints['Sink(Wärme)|on_hours_total'],
- flow.model.variables['Sink(Wärme)|on_hours_total']
- == (flow.model.variables['Sink(Wärme)|on'] * model.hours_per_step).sum(),
+ flow.submodel.variables['Sink(Wärme)|on_hours_total']
+ == (flow.submodel.variables['Sink(Wärme)|on'] * model.hours_per_step).sum('time'),
)
# Investment
- assert_var_equal(model['Sink(Wärme)|size'], model.add_variables(lower=20, upper=200))
+ assert_var_equal(
+ model['Sink(Wärme)|size'],
+ model.add_variables(lower=20, upper=200, coords=model.get_coords(['period', 'scenario'])),
+ )
mega = 0.2 * 200 # Relative minimum * maximum size
assert_conequal(
- model.constraints['Sink(Wärme)|lb_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- >= flow.model.variables['Sink(Wärme)|on'] * mega + flow.model.variables['Sink(Wärme)|size'] * 0.2 - mega,
+ model.constraints['Sink(Wärme)|flow_rate|lb2'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ >= flow.submodel.variables['Sink(Wärme)|on'] * mega
+ + flow.submodel.variables['Sink(Wärme)|size'] * 0.2
+ - mega,
)
assert_conequal(
- model.constraints['Sink(Wärme)|ub_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate'] <= flow.model.variables['Sink(Wärme)|size'] * 0.8,
+ model.constraints['Sink(Wärme)|flow_rate|ub2'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate'] <= flow.submodel.variables['Sink(Wärme)|size'] * 0.8,
)
class TestFlowWithFixedProfile:
"""Test Flow with fixed relative profile."""
- def test_fixed_relative_profile(self, basic_flow_system_linopy):
+ def test_fixed_relative_profile(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with a fixed relative profile."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
# Create a time-varying profile (e.g., for a load or renewable generation)
profile = np.sin(np.linspace(0, 2 * np.pi, len(timesteps))) * 0.5 + 0.5 # Values between 0 and 1
flow = fx.Flow(
- 'Wärme', bus='Fernwärme', size=100, fixed_relative_profile=xr.DataArray(profile, coords=(timesteps,))
+ 'Wärme',
+ bus='Fernwärme',
+ size=100,
+ fixed_relative_profile=profile,
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
assert_var_equal(
- flow.model.variables['Sink(Wärme)|flow_rate'],
- model.add_variables(lower=profile * 100, upper=profile * 100, coords=(timesteps,)),
+ flow.submodel.variables['Sink(Wärme)|flow_rate'],
+ model.add_variables(
+ lower=flow.fixed_relative_profile * 100,
+ upper=flow.fixed_relative_profile * 100,
+ coords=model.get_coords(),
+ ),
)
- def test_fixed_profile_with_investment(self, basic_flow_system_linopy):
+ def test_fixed_profile_with_investment(self, basic_flow_system_linopy_coords, coords_config):
"""Test flow with fixed profile and investment."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
# Create a fixed profile
profile = np.sin(np.linspace(0, 2 * np.pi, len(timesteps))) * 0.5 + 0.5
@@ -1132,23 +1302,23 @@ def test_fixed_profile_with_investment(self, basic_flow_system_linopy):
flow = fx.Flow(
'Wärme',
bus='Fernwärme',
- size=fx.InvestParameters(minimum_size=50, maximum_size=200, optional=True),
- fixed_relative_profile=xr.DataArray(profile, coords=(timesteps,)),
+ size=fx.InvestParameters(minimum_size=50, maximum_size=200, mandatory=False),
+ fixed_relative_profile=profile,
)
- flow_system.add_elements(fx.Sink('Sink', sink=flow))
+ flow_system.add_elements(fx.Sink('Sink', inputs=[flow]))
model = create_linopy_model(flow_system)
assert_var_equal(
- flow.model.variables['Sink(Wärme)|flow_rate'],
- model.add_variables(lower=0, upper=profile * 200, coords=(timesteps,)),
+ flow.submodel.variables['Sink(Wärme)|flow_rate'],
+ model.add_variables(lower=0, upper=flow.fixed_relative_profile * 200, coords=model.get_coords()),
)
# The constraint should link flow_rate to size * profile
assert_conequal(
- model.constraints['Sink(Wärme)|fix_Sink(Wärme)|flow_rate'],
- flow.model.variables['Sink(Wärme)|flow_rate']
- == flow.model.variables['Sink(Wärme)|size'] * xr.DataArray(profile, coords=(timesteps,)),
+ model.constraints['Sink(Wärme)|flow_rate|fixed'],
+ flow.submodel.variables['Sink(Wärme)|flow_rate']
+ == flow.submodel.variables['Sink(Wärme)|size'] * flow.fixed_relative_profile,
)
diff --git a/tests/test_functional.py b/tests/test_functional.py
index 5db83f656..0f9fe02ef 100644
--- a/tests/test_functional.py
+++ b/tests/test_functional.py
@@ -73,9 +73,9 @@ def flow_system_base(timesteps: pd.DatetimeIndex) -> fx.FlowSystem:
flow_system.add_elements(
fx.Sink(
label='Wärmelast',
- sink=fx.Flow(label='Wärme', bus='Fernwärme', fixed_relative_profile=data.thermal_demand, size=1),
+ inputs=[fx.Flow(label='Wärme', bus='Fernwärme', fixed_relative_profile=data.thermal_demand, size=1)],
),
- fx.Source(label='Gastarif', source=fx.Flow(label='Gas', bus='Gas', effects_per_flow_hour=1)),
+ fx.Source(label='Gastarif', outputs=[fx.Flow(label='Gas', bus='Gas', effects_per_flow_hour=1)]),
)
return flow_system
@@ -112,7 +112,7 @@ def test_solve_and_load(solver_fixture, time_steps_fixture):
def test_minimal_model(solver_fixture, time_steps_fixture):
results = solve_and_load(flow_system_minimal(time_steps_fixture), solver_fixture)
- assert_allclose(results.model.variables['costs|total'].solution.values, 80, rtol=1e-5, atol=1e-10)
+ assert_allclose(results.model.variables['costs'].solution.values, 80, rtol=1e-5, atol=1e-10)
assert_allclose(
results.model.variables['Boiler(Q_th)|flow_rate'].solution.values,
@@ -122,14 +122,14 @@ def test_minimal_model(solver_fixture, time_steps_fixture):
)
assert_allclose(
- results.model.variables['costs(operation)|total_per_timestep'].solution.values,
+ results.model.variables['costs(temporal)|per_timestep'].solution.values,
[-0.0, 20.0, 40.0, -0.0, 20.0],
rtol=1e-5,
atol=1e-10,
)
assert_allclose(
- results.model.variables['Gastarif(Gas)->costs(operation)'].solution.values,
+ results.model.variables['Gastarif(Gas)->costs(temporal)'].solution.values,
[-0.0, 20.0, 40.0, -0.0, 20.0],
rtol=1e-5,
atol=1e-10,
@@ -146,7 +146,7 @@ def test_fixed_size(solver_fixture, time_steps_fixture):
Q_th=fx.Flow(
'Q_th',
bus='Fernwärme',
- size=fx.InvestParameters(fixed_size=1000, fix_effects=10, specific_effects=1),
+ size=fx.InvestParameters(fixed_size=1000, effects_of_investment=10, effects_of_investment_per_size=1),
),
)
)
@@ -155,25 +155,25 @@ def test_fixed_size(solver_fixture, time_steps_fixture):
boiler = flow_system.all_elements['Boiler']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
80 + 1000 * 1 + 10,
rtol=1e-5,
atol=1e-10,
err_msg='The total costs does not have the right value',
)
assert_allclose(
- boiler.Q_th.model._investment.size.solution.item(),
+ boiler.Q_th.submodel._investment.size.solution.item(),
1000,
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__Investment_size" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model._investment.is_invested.solution.item(),
+ boiler.Q_th.submodel._investment.invested.solution.item(),
1,
rtol=1e-5,
atol=1e-10,
- err_msg='"Boiler__Q_th__Investment_size" does not have the right value',
+ err_msg='"Boiler__Q_th__invested" does not have the right value',
)
@@ -187,7 +187,7 @@ def test_optimize_size(solver_fixture, time_steps_fixture):
Q_th=fx.Flow(
'Q_th',
bus='Fernwärme',
- size=fx.InvestParameters(fix_effects=10, specific_effects=1),
+ size=fx.InvestParameters(effects_of_investment=10, effects_of_investment_per_size=1),
),
)
)
@@ -196,21 +196,21 @@ def test_optimize_size(solver_fixture, time_steps_fixture):
boiler = flow_system.all_elements['Boiler']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
80 + 20 * 1 + 10,
rtol=1e-5,
atol=1e-10,
err_msg='The total costs does not have the right value',
)
assert_allclose(
- boiler.Q_th.model._investment.size.solution.item(),
+ boiler.Q_th.submodel._investment.size.solution.item(),
20,
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__Investment_size" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model._investment.is_invested.solution.item(),
+ boiler.Q_th.submodel._investment.invested.solution.item(),
1,
rtol=1e-5,
atol=1e-10,
@@ -228,7 +228,7 @@ def test_size_bounds(solver_fixture, time_steps_fixture):
Q_th=fx.Flow(
'Q_th',
bus='Fernwärme',
- size=fx.InvestParameters(minimum_size=40, fix_effects=10, specific_effects=1),
+ size=fx.InvestParameters(minimum_size=40, effects_of_investment=10, effects_of_investment_per_size=1),
),
)
)
@@ -237,21 +237,21 @@ def test_size_bounds(solver_fixture, time_steps_fixture):
boiler = flow_system.all_elements['Boiler']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
80 + 40 * 1 + 10,
rtol=1e-5,
atol=1e-10,
err_msg='The total costs does not have the right value',
)
assert_allclose(
- boiler.Q_th.model._investment.size.solution.item(),
+ boiler.Q_th.submodel._investment.size.solution.item(),
40,
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__Investment_size" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model._investment.is_invested.solution.item(),
+ boiler.Q_th.submodel._investment.invested.solution.item(),
1,
rtol=1e-5,
atol=1e-10,
@@ -269,7 +269,9 @@ def test_optional_invest(solver_fixture, time_steps_fixture):
Q_th=fx.Flow(
'Q_th',
bus='Fernwärme',
- size=fx.InvestParameters(optional=True, minimum_size=40, fix_effects=10, specific_effects=1),
+ size=fx.InvestParameters(
+ mandatory=False, minimum_size=40, effects_of_investment=10, effects_of_investment_per_size=1
+ ),
),
),
fx.linear_converters.Boiler(
@@ -279,7 +281,9 @@ def test_optional_invest(solver_fixture, time_steps_fixture):
Q_th=fx.Flow(
'Q_th',
bus='Fernwärme',
- size=fx.InvestParameters(optional=True, minimum_size=50, fix_effects=10, specific_effects=1),
+ size=fx.InvestParameters(
+ mandatory=False, minimum_size=50, effects_of_investment=10, effects_of_investment_per_size=1
+ ),
),
),
)
@@ -289,21 +293,21 @@ def test_optional_invest(solver_fixture, time_steps_fixture):
boiler_optional = flow_system.all_elements['Boiler_optional']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
80 + 40 * 1 + 10,
rtol=1e-5,
atol=1e-10,
err_msg='The total costs does not have the right value',
)
assert_allclose(
- boiler.Q_th.model._investment.size.solution.item(),
+ boiler.Q_th.submodel._investment.size.solution.item(),
40,
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__Investment_size" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model._investment.is_invested.solution.item(),
+ boiler.Q_th.submodel._investment.invested.solution.item(),
1,
rtol=1e-5,
atol=1e-10,
@@ -311,74 +315,20 @@ def test_optional_invest(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler_optional.Q_th.model._investment.size.solution.item(),
+ boiler_optional.Q_th.submodel._investment.size.solution.item(),
0,
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__Investment_size" does not have the right value',
)
assert_allclose(
- boiler_optional.Q_th.model._investment.is_invested.solution.item(),
+ boiler_optional.Q_th.submodel._investment.invested.solution.item(),
0,
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__IsInvested" does not have the right value',
)
- def test_fixed_relative_profile(self):
- self.flow_system = self.create_model(self.datetime_array)
- self.flow_system.add_elements(
- fx.linear_converters.Boiler(
- 'Boiler',
- 0.5,
- Q_fu=fx.Flow('Q_fu', bus=self.get_element('Gas')),
- Q_th=fx.Flow(
- 'Q_th',
- bus=self.get_element('Fernwärme'),
- size=fx.InvestParameters(optional=True, minimum_size=40, fix_effects=10, specific_effects=1),
- ),
- ),
- fx.linear_converters.Boiler(
- 'Boiler_optional',
- 0.5,
- Q_fu=fx.Flow('Q_fu', bus=self.get_element('Gas')),
- Q_th=fx.Flow(
- 'Q_th',
- bus=self.get_element('Fernwärme'),
- size=fx.InvestParameters(optional=True, minimum_size=50, fix_effects=10, specific_effects=1),
- ),
- ),
- )
- self.flow_system.add_elements(
- fx.Source(
- 'Wärmequelle',
- source=fx.Flow(
- 'Q_th',
- bus=self.get_element('Fernwärme'),
- fixed_relative_profile=np.linspace(0, 5, len(self.datetime_array)),
- size=fx.InvestParameters(optional=False, minimum_size=2, maximum_size=5),
- ),
- )
- )
- self.get_element('Fernwärme').excess_penalty_per_flow_hour = 1e5
-
- self.solve_and_load(self.flow_system)
- source = self.get_element('Wärmequelle')
- assert_allclose(
- source.source.model.flow_rate.result,
- np.linspace(0, 5, len(self.datetime_array)) * source.source.model._investment.size.result,
- rtol=self.mip_gap,
- atol=1e-10,
- err_msg='The total costs does not have the right value',
- )
- assert_allclose(
- source.source.model._investment.size.result,
- 2,
- rtol=self.mip_gap,
- atol=1e-10,
- err_msg='The total costs does not have the right value',
- )
-
def test_on(solver_fixture, time_steps_fixture):
"""Tests if the On Variable is correctly created and calculated in a Flow"""
@@ -396,7 +346,7 @@ def test_on(solver_fixture, time_steps_fixture):
boiler = flow_system.all_elements['Boiler']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
80,
rtol=1e-5,
atol=1e-10,
@@ -404,14 +354,14 @@ def test_on(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler.Q_th.model.on_off.on.solution.values,
+ boiler.Q_th.submodel.on_off.on.solution.values,
[0, 1, 1, 0, 1],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__on" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.flow_rate.solution.values,
+ boiler.Q_th.submodel.flow_rate.solution.values,
[0, 10, 20, 0, 10],
rtol=1e-5,
atol=1e-10,
@@ -440,7 +390,7 @@ def test_off(solver_fixture, time_steps_fixture):
boiler = flow_system.all_elements['Boiler']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
80,
rtol=1e-5,
atol=1e-10,
@@ -448,21 +398,21 @@ def test_off(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler.Q_th.model.on_off.on.solution.values,
+ boiler.Q_th.submodel.on_off.on.solution.values,
[0, 1, 1, 0, 1],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__on" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.on_off.off.solution.values,
- 1 - boiler.Q_th.model.on_off.on.solution.values,
+ boiler.Q_th.submodel.on_off.off.solution.values,
+ 1 - boiler.Q_th.submodel.on_off.on.solution.values,
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__off" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.flow_rate.solution.values,
+ boiler.Q_th.submodel.flow_rate.solution.values,
[0, 10, 20, 0, 10],
rtol=1e-5,
atol=1e-10,
@@ -491,7 +441,7 @@ def test_switch_on_off(solver_fixture, time_steps_fixture):
boiler = flow_system.all_elements['Boiler']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
80,
rtol=1e-5,
atol=1e-10,
@@ -499,28 +449,28 @@ def test_switch_on_off(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler.Q_th.model.on_off.on.solution.values,
+ boiler.Q_th.submodel.on_off.on.solution.values,
[0, 1, 1, 0, 1],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__on" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.on_off.switch_on.solution.values,
+ boiler.Q_th.submodel.on_off.switch_on.solution.values,
[0, 1, 0, 0, 1],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__switch_on" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.on_off.switch_off.solution.values,
+ boiler.Q_th.submodel.on_off.switch_off.solution.values,
[0, 0, 0, 1, 0],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__switch_on" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.flow_rate.solution.values,
+ boiler.Q_th.submodel.flow_rate.solution.values,
[0, 10, 20, 0, 10],
rtol=1e-5,
atol=1e-10,
@@ -555,7 +505,7 @@ def test_on_total_max(solver_fixture, time_steps_fixture):
boiler = flow_system.all_elements['Boiler']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
140,
rtol=1e-5,
atol=1e-10,
@@ -563,14 +513,14 @@ def test_on_total_max(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler.Q_th.model.on_off.on.solution.values,
+ boiler.Q_th.submodel.on_off.on.solution.values,
[0, 0, 1, 0, 0],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__on" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.flow_rate.solution.values,
+ boiler.Q_th.submodel.flow_rate.solution.values,
[0, 0, 20, 0, 0],
rtol=1e-5,
atol=1e-10,
@@ -605,7 +555,7 @@ def test_on_total_bounds(solver_fixture, time_steps_fixture):
),
),
)
- flow_system.all_elements['Wärmelast'].sink.fixed_relative_profile = np.array(
+ flow_system.all_elements['Wärmelast'].inputs[0].fixed_relative_profile = np.array(
[0, 10, 20, 0, 12]
) # Else it's non-deterministic
@@ -614,7 +564,7 @@ def test_on_total_bounds(solver_fixture, time_steps_fixture):
boiler_backup = flow_system.all_elements['Boiler_backup']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
114,
rtol=1e-5,
atol=1e-10,
@@ -622,14 +572,14 @@ def test_on_total_bounds(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler.Q_th.model.on_off.on.solution.values,
+ boiler.Q_th.submodel.on_off.on.solution.values,
[0, 0, 1, 0, 1],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__on" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.flow_rate.solution.values,
+ boiler.Q_th.submodel.flow_rate.solution.values,
[0, 0, 20, 0, 12 - 1e-5],
rtol=1e-5,
atol=1e-10,
@@ -637,14 +587,14 @@ def test_on_total_bounds(solver_fixture, time_steps_fixture):
)
assert_allclose(
- sum(boiler_backup.Q_th.model.on_off.on.solution.values),
+ sum(boiler_backup.Q_th.submodel.on_off.on.solution.values),
3,
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler_backup__Q_th__on" does not have the right value',
)
assert_allclose(
- boiler_backup.Q_th.model.flow_rate.solution.values,
+ boiler_backup.Q_th.submodel.flow_rate.solution.values,
[0, 10, 1.0e-05, 0, 1.0e-05],
rtol=1e-5,
atol=1e-10,
@@ -674,7 +624,7 @@ def test_consecutive_on_off(solver_fixture, time_steps_fixture):
Q_th=fx.Flow('Q_th', bus='Fernwärme', size=100),
),
)
- flow_system.all_elements['Wärmelast'].sink.fixed_relative_profile = np.array([5, 10, 20, 18, 12])
+ flow_system.all_elements['Wärmelast'].inputs[0].fixed_relative_profile = np.array([5, 10, 20, 18, 12])
# Else it's non-deterministic
solve_and_load(flow_system, solver_fixture)
@@ -682,7 +632,7 @@ def test_consecutive_on_off(solver_fixture, time_steps_fixture):
boiler_backup = flow_system.all_elements['Boiler_backup']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
190,
rtol=1e-5,
atol=1e-10,
@@ -690,14 +640,14 @@ def test_consecutive_on_off(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler.Q_th.model.on_off.on.solution.values,
+ boiler.Q_th.submodel.on_off.on.solution.values,
[1, 1, 0, 1, 1],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler__Q_th__on" does not have the right value',
)
assert_allclose(
- boiler.Q_th.model.flow_rate.solution.values,
+ boiler.Q_th.submodel.flow_rate.solution.values,
[5, 10, 0, 18, 12],
rtol=1e-5,
atol=1e-10,
@@ -705,7 +655,7 @@ def test_consecutive_on_off(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler_backup.Q_th.model.flow_rate.solution.values,
+ boiler_backup.Q_th.submodel.flow_rate.solution.values,
[0, 0, 20, 0, 0],
rtol=1e-5,
atol=1e-10,
@@ -736,7 +686,7 @@ def test_consecutive_off(solver_fixture, time_steps_fixture):
),
),
)
- flow_system.all_elements['Wärmelast'].sink.fixed_relative_profile = np.array(
+ flow_system.all_elements['Wärmelast'].inputs[0].fixed_relative_profile = np.array(
[5, 0, 20, 18, 12]
) # Else it's non-deterministic
@@ -745,7 +695,7 @@ def test_consecutive_off(solver_fixture, time_steps_fixture):
boiler_backup = flow_system.all_elements['Boiler_backup']
costs = flow_system.effects['costs']
assert_allclose(
- costs.model.total.solution.item(),
+ costs.submodel.total.solution.item(),
110,
rtol=1e-5,
atol=1e-10,
@@ -753,21 +703,21 @@ def test_consecutive_off(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler_backup.Q_th.model.on_off.on.solution.values,
+ boiler_backup.Q_th.submodel.on_off.on.solution.values,
[0, 0, 1, 0, 0],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler_backup__Q_th__on" does not have the right value',
)
assert_allclose(
- boiler_backup.Q_th.model.on_off.off.solution.values,
+ boiler_backup.Q_th.submodel.on_off.off.solution.values,
[1, 1, 0, 1, 1],
rtol=1e-5,
atol=1e-10,
err_msg='"Boiler_backup__Q_th__off" does not have the right value',
)
assert_allclose(
- boiler_backup.Q_th.model.flow_rate.solution.values,
+ boiler_backup.Q_th.submodel.flow_rate.solution.values,
[0, 0, 1e-5, 0, 0],
rtol=1e-5,
atol=1e-10,
@@ -775,7 +725,7 @@ def test_consecutive_off(solver_fixture, time_steps_fixture):
)
assert_allclose(
- boiler.Q_th.model.flow_rate.solution.values,
+ boiler.Q_th.submodel.flow_rate.solution.values,
[5, 0, 20 - 1e-5, 18, 12],
rtol=1e-5,
atol=1e-10,
diff --git a/tests/test_integration.py b/tests/test_integration.py
index 42fb5f0b7..6e5da63d6 100644
--- a/tests/test_integration.py
+++ b/tests/test_integration.py
@@ -20,12 +20,12 @@ def test_simple_flow_system(self, simple_flow_system, highs_solver):
# Cost assertions
assert_almost_equal_numeric(
- effects['costs'].model.total.solution.item(), 81.88394666666667, 'costs doesnt match expected value'
+ effects['costs'].submodel.total.solution.item(), 81.88394666666667, 'costs doesnt match expected value'
)
# CO2 assertions
assert_almost_equal_numeric(
- effects['CO2'].model.total.solution.item(), 255.09184, 'CO2 doesnt match expected value'
+ effects['CO2'].submodel.total.solution.item(), 255.09184, 'CO2 doesnt match expected value'
)
def test_model_components(self, simple_flow_system, highs_solver):
@@ -37,14 +37,14 @@ def test_model_components(self, simple_flow_system, highs_solver):
# Boiler assertions
assert_almost_equal_numeric(
- comps['Boiler'].Q_th.model.flow_rate.solution.values,
+ comps['Boiler'].Q_th.submodel.flow_rate.solution.values,
[0, 0, 0, 28.4864, 35, 0, 0, 0, 0],
'Q_th doesnt match expected value',
)
# CHP unit assertions
assert_almost_equal_numeric(
- comps['CHP_unit'].Q_th.model.flow_rate.solution.values,
+ comps['CHP_unit'].Q_th.submodel.flow_rate.solution.values,
[30.0, 26.66666667, 75.0, 75.0, 75.0, 20.0, 20.0, 20.0, 20.0],
'Q_th doesnt match expected value',
)
@@ -63,114 +63,11 @@ def test_results_persistence(self, simple_flow_system, highs_solver):
# Verify key variables from loaded results
assert_almost_equal_numeric(
- results.solution['costs|total'].values,
+ results.solution['costs'].values,
81.88394666666667,
'costs doesnt match expected value',
)
- assert_almost_equal_numeric(results.solution['CO2|total'].values, 255.09184, 'CO2 doesnt match expected value')
-
-
-class TestComponents:
- def test_transmission_basic(self, basic_flow_system, highs_solver):
- """Test basic transmission functionality"""
- flow_system = basic_flow_system
- flow_system.add_elements(fx.Bus('Wärme lokal'))
-
- boiler = fx.linear_converters.Boiler(
- 'Boiler', eta=0.5, Q_th=fx.Flow('Q_th', bus='Wärme lokal'), Q_fu=fx.Flow('Q_fu', bus='Gas')
- )
-
- transmission = fx.Transmission(
- 'Rohr',
- relative_losses=0.2,
- absolute_losses=20,
- in1=fx.Flow('Rohr1', 'Wärme lokal', size=fx.InvestParameters(specific_effects=5, maximum_size=1e6)),
- out1=fx.Flow('Rohr2', 'Fernwärme', size=1000),
- )
-
- flow_system.add_elements(transmission, boiler)
-
- _ = create_calculation_and_solve(flow_system, highs_solver, 'test_transmission_basic')
-
- # Assertions
- assert_almost_equal_numeric(
- transmission.in1.model.on_off.on.solution.values,
- np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
- 'On does not work properly',
- )
-
- assert_almost_equal_numeric(
- transmission.in1.model.flow_rate.solution.values * 0.8 - 20,
- transmission.out1.model.flow_rate.solution.values,
- 'Losses are not computed correctly',
- )
-
- def test_transmission_advanced(self, basic_flow_system, highs_solver):
- """Test advanced transmission functionality"""
- flow_system = basic_flow_system
- flow_system.add_elements(fx.Bus('Wärme lokal'))
-
- boiler = fx.linear_converters.Boiler(
- 'Boiler_Standard',
- eta=0.9,
- Q_th=fx.Flow('Q_th', bus='Fernwärme', relative_maximum=np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1])),
- Q_fu=fx.Flow('Q_fu', bus='Gas'),
- )
-
- boiler2 = fx.linear_converters.Boiler(
- 'Boiler_backup', eta=0.4, Q_th=fx.Flow('Q_th', bus='Wärme lokal'), Q_fu=fx.Flow('Q_fu', bus='Gas')
- )
-
- last2 = fx.Sink(
- 'Wärmelast2',
- sink=fx.Flow(
- 'Q_th_Last',
- bus='Wärme lokal',
- size=1,
- fixed_relative_profile=flow_system.components['Wärmelast'].sink.fixed_relative_profile
- * np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1]),
- ),
- )
-
- transmission = fx.Transmission(
- 'Rohr',
- relative_losses=0.2,
- absolute_losses=20,
- in1=fx.Flow('Rohr1a', bus='Wärme lokal', size=fx.InvestParameters(specific_effects=5, maximum_size=1000)),
- out1=fx.Flow('Rohr1b', 'Fernwärme', size=1000),
- in2=fx.Flow('Rohr2a', 'Fernwärme', size=1000),
- out2=fx.Flow('Rohr2b', bus='Wärme lokal', size=1000),
- )
-
- flow_system.add_elements(transmission, boiler, boiler2, last2)
-
- calculation = create_calculation_and_solve(flow_system, highs_solver, 'test_transmission_advanced')
-
- # Assertions
- assert_almost_equal_numeric(
- transmission.in1.model.on_off.on.solution.values,
- np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0]),
- 'On does not work properly',
- )
-
- assert_almost_equal_numeric(
- calculation.results.model.variables['Rohr(Rohr1b)|flow_rate'].solution.values,
- transmission.out1.model.flow_rate.solution.values,
- 'Flow rate of Rohr__Rohr1b is not correct',
- )
-
- assert_almost_equal_numeric(
- transmission.in1.model.flow_rate.solution.values * 0.8
- - np.array([20 if val > 0.1 else 0 for val in transmission.in1.model.flow_rate.solution.values]),
- transmission.out1.model.flow_rate.solution.values,
- 'Losses are not computed correctly',
- )
-
- assert_almost_equal_numeric(
- transmission.in1.model._investment.size.solution.item(),
- transmission.in2.model._investment.size.solution.item(),
- 'The Investments are not equated correctly',
- )
+ assert_almost_equal_numeric(results.solution['CO2'].values, 255.09184, 'CO2 doesnt match expected value')
class TestComplex:
@@ -179,13 +76,13 @@ def test_basic_flow_system(self, flow_system_base, highs_solver):
# Assertions
assert_almost_equal_numeric(
- calculation.results.model['costs|total'].solution.item(),
+ calculation.results.model['costs'].solution.item(),
-11597.873624489237,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- calculation.results.model['costs(operation)|total_per_timestep'].solution.values,
+ calculation.results.model['costs(temporal)|per_timestep'].solution.values,
[
-2.38500000e03,
-2.21681333e03,
@@ -201,55 +98,55 @@ def test_basic_flow_system(self, flow_system_base, highs_solver):
)
assert_almost_equal_numeric(
- sum(calculation.results.model['CO2(operation)->costs(operation)'].solution.values),
+ sum(calculation.results.model['CO2(temporal)->costs(temporal)'].solution.values),
258.63729669618675,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- sum(calculation.results.model['Kessel(Q_th)->costs(operation)'].solution.values),
+ sum(calculation.results.model['Kessel(Q_th)->costs(temporal)'].solution.values),
0.01,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- sum(calculation.results.model['Kessel->costs(operation)'].solution.values),
+ sum(calculation.results.model['Kessel->costs(temporal)'].solution.values),
-0.0,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- sum(calculation.results.model['Gastarif(Q_Gas)->costs(operation)'].solution.values),
+ sum(calculation.results.model['Gastarif(Q_Gas)->costs(temporal)'].solution.values),
39.09153113079115,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- sum(calculation.results.model['Einspeisung(P_el)->costs(operation)'].solution.values),
+ sum(calculation.results.model['Einspeisung(P_el)->costs(temporal)'].solution.values),
-14196.61245231646,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- sum(calculation.results.model['KWK->costs(operation)'].solution.values),
+ sum(calculation.results.model['KWK->costs(temporal)'].solution.values),
0.0,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- calculation.results.model['Kessel(Q_th)->costs(invest)'].solution.values,
+ calculation.results.model['Kessel(Q_th)->costs(periodic)'].solution.values,
1000 + 500,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- calculation.results.model['Speicher->costs(invest)'].solution.values,
+ calculation.results.model['Speicher->costs(periodic)'].solution.values,
800 + 1,
'costs doesnt match expected value',
)
assert_almost_equal_numeric(
- calculation.results.model['CO2(operation)|total'].solution.values,
+ calculation.results.model['CO2(temporal)'].solution.values,
1293.1864834809337,
'CO2 doesnt match expected value',
)
assert_almost_equal_numeric(
- calculation.results.model['CO2(invest)|total'].solution.values,
+ calculation.results.model['CO2(periodic)'].solution.values,
0.9999999999999994,
'CO2 doesnt match expected value',
)
@@ -317,38 +214,38 @@ def test_piecewise_conversion(self, flow_system_piecewise_conversion, highs_solv
# Compare expected values with actual values
assert_almost_equal_numeric(
- effects['costs'].model.total.solution.item(), -10710.997365760755, 'costs doesnt match expected value'
+ effects['costs'].submodel.total.solution.item(), -10710.997365760755, 'costs doesnt match expected value'
)
assert_almost_equal_numeric(
- effects['CO2'].model.total.solution.item(), 1278.7939026086956, 'CO2 doesnt match expected value'
+ effects['CO2'].submodel.total.solution.item(), 1278.7939026086956, 'CO2 doesnt match expected value'
)
assert_almost_equal_numeric(
- comps['Kessel'].Q_th.model.flow_rate.solution.values,
+ comps['Kessel'].Q_th.submodel.flow_rate.solution.values,
[0, 0, 0, 45, 0, 0, 0, 0, 0],
'Kessel doesnt match expected value',
)
kwk_flows = {flow.label: flow for flow in comps['KWK'].inputs + comps['KWK'].outputs}
assert_almost_equal_numeric(
- kwk_flows['Q_th'].model.flow_rate.solution.values,
+ kwk_flows['Q_th'].submodel.flow_rate.solution.values,
[45.0, 45.0, 64.5962087, 100.0, 61.3136, 45.0, 45.0, 12.86469565, 0.0],
'KWK Q_th doesnt match expected value',
)
assert_almost_equal_numeric(
- kwk_flows['P_el'].model.flow_rate.solution.values,
+ kwk_flows['P_el'].submodel.flow_rate.solution.values,
[40.0, 40.0, 47.12589407, 60.0, 45.93221818, 40.0, 40.0, 10.91784108, -0.0],
'KWK P_el doesnt match expected value',
)
assert_almost_equal_numeric(
- comps['Speicher'].model.netto_discharge.solution.values,
+ comps['Speicher'].submodel.netto_discharge.solution.values,
[-15.0, -45.0, 25.4037913, -35.0, 48.6864, -25.0, -25.0, 7.13530435, 20.0],
'Speicher nettoFlow doesnt match expected value',
)
assert_almost_equal_numeric(
- comps['Speicher'].model.variables['Speicher|PiecewiseEffects|costs'].solution.values,
+ comps['Speicher'].submodel.variables['Speicher|PiecewiseEffects|costs'].solution.values,
454.74666666666667,
- 'Speicher investCosts_segmented_costs doesnt match expected value',
+ 'Speicher investcosts_segmented_costs doesnt match expected value',
)
@@ -407,17 +304,23 @@ def test_modeling_types_costs(self, modeling_calculation):
if modeling_type in ['full', 'aggregated']:
assert_almost_equal_numeric(
- calc.results.model['costs|total'].solution.item(),
+ calc.results.model['costs'].solution.item(),
expected_costs[modeling_type],
- f'Costs do not match for {modeling_type} modeling type',
+ f'costs do not match for {modeling_type} modeling type',
)
else:
assert_almost_equal_numeric(
- calc.results.solution_without_overlap('costs(operation)|total_per_timestep').sum(),
+ calc.results.solution_without_overlap('costs(temporal)|per_timestep').sum(),
expected_costs[modeling_type],
- f'Costs do not match for {modeling_type} modeling type',
+ f'costs do not match for {modeling_type} modeling type',
)
+ def test_segmented_io(self, modeling_calculation):
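+ """Smoke test: segmented results survive a to_file/from_file round trip."""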
+ calc, modeling_type = modeling_calculation
+ if modeling_type == 'segmented':
+ calc.results.to_file()
+ _ = fx.results.SegmentedCalculationResults.from_file(calc.folder, calc.name)
+
if __name__ == '__main__':
pytest.main(['-v'])
diff --git a/tests/test_invest_parameters_deprecation.py b/tests/test_invest_parameters_deprecation.py
new file mode 100644
index 000000000..438d7f4b8
--- /dev/null
+++ b/tests/test_invest_parameters_deprecation.py
@@ -0,0 +1,344 @@
+"""
+Test backward compatibility and deprecation warnings for InvestParameters.
+
+This test verifies that:
+1. Old parameter names (fix_effects, specific_effects, divest_effects, piecewise_effects) still work with warnings
+2. New parameter names (effects_of_investment, effects_of_investment_per_size, effects_of_retirement, piecewise_effects_of_investment) work correctly
+3. Both old and new approaches produce equivalent results
+"""
+
+import warnings
+
+import pytest
+
+from flixopt.interface import InvestParameters
+
+
+class TestInvestParametersDeprecation:
+ """Test suite for InvestParameters parameter deprecation."""
+
+ def test_new_parameters_no_warnings(self):
+ """Test that new parameter names don't trigger warnings."""
+ with warnings.catch_warnings():
+ warnings.simplefilter('error', DeprecationWarning)
+ # Should not raise DeprecationWarning
+ params = InvestParameters(
+ fixed_size=100,
+ effects_of_investment={'cost': 25000},
+ effects_of_investment_per_size={'cost': 1200},
+ effects_of_retirement={'cost': 5000},
+ )
+ assert params.effects_of_investment == {'cost': 25000}
+ assert params.effects_of_investment_per_size == {'cost': 1200}
+ assert params.effects_of_retirement == {'cost': 5000}
+
+ def test_old_fix_effects_deprecation_warning(self):
+ """Test that fix_effects triggers deprecation warning."""
+ with pytest.warns(DeprecationWarning, match='fix_effects.*deprecated.*effects_of_investment'):
+ params = InvestParameters(fix_effects={'cost': 25000})
+ # Verify backward compatibility
+ assert params.effects_of_investment == {'cost': 25000}
+
+ # Accessing the property also triggers warning
+ with pytest.warns(DeprecationWarning, match='fix_effects.*deprecated.*effects_of_investment'):
+ assert params.fix_effects == {'cost': 25000}
+
+ def test_old_specific_effects_deprecation_warning(self):
+ """Test that specific_effects triggers deprecation warning."""
+ with pytest.warns(DeprecationWarning, match='specific_effects.*deprecated.*effects_of_investment_per_size'):
+ params = InvestParameters(specific_effects={'cost': 1200})
+ # Verify backward compatibility
+ assert params.effects_of_investment_per_size == {'cost': 1200}
+
+ # Accessing the property also triggers warning
+ with pytest.warns(DeprecationWarning, match='specific_effects.*deprecated.*effects_of_investment_per_size'):
+ assert params.specific_effects == {'cost': 1200}
+
+ def test_old_divest_effects_deprecation_warning(self):
+ """Test that divest_effects triggers deprecation warning."""
+ with pytest.warns(DeprecationWarning, match='divest_effects.*deprecated.*effects_of_retirement'):
+ params = InvestParameters(divest_effects={'cost': 5000})
+ # Verify backward compatibility
+ assert params.effects_of_retirement == {'cost': 5000}
+
+ # Accessing the property also triggers warning
+ with pytest.warns(DeprecationWarning, match='divest_effects.*deprecated.*effects_of_retirement'):
+ assert params.divest_effects == {'cost': 5000}
+
+ def test_old_piecewise_effects_deprecation_warning(self):
+ """Test that piecewise_effects triggers deprecation warning."""
+ from flixopt.interface import Piece, Piecewise, PiecewiseEffects
+
+ test_piecewise = PiecewiseEffects(
+ piecewise_origin=Piecewise([Piece(0, 100)]),
+ piecewise_shares={'cost': Piecewise([Piece(800, 600)])},
+ )
+ with pytest.warns(DeprecationWarning, match='piecewise_effects.*deprecated.*piecewise_effects_of_investment'):
+ params = InvestParameters(piecewise_effects=test_piecewise)
+ # Verify backward compatibility
+ assert params.piecewise_effects_of_investment is test_piecewise
+
+ # Accessing the property also triggers warning
+ with pytest.warns(DeprecationWarning, match='piecewise_effects.*deprecated.*piecewise_effects_of_investment'):
+ assert params.piecewise_effects is test_piecewise
+
+ def test_all_old_parameters_together(self):
+ """Test all old parameters work together with warnings."""
+ from flixopt.interface import Piece, Piecewise, PiecewiseEffects
+
+ test_piecewise = PiecewiseEffects(
+ piecewise_origin=Piecewise([Piece(0, 100)]),
+ piecewise_shares={'cost': Piecewise([Piece(800, 600)])},
+ )
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always', DeprecationWarning)
+ params = InvestParameters(
+ fixed_size=100,
+ fix_effects={'cost': 25000},
+ specific_effects={'cost': 1200},
+ divest_effects={'cost': 5000},
+ piecewise_effects=test_piecewise,
+ )
+
+ # Should trigger 4 deprecation warnings (from kwargs)
+ assert len([warning for warning in w if issubclass(warning.category, DeprecationWarning)]) == 4
+
+ # Verify all mappings work (accessing new properties - no warnings)
+ assert params.effects_of_investment == {'cost': 25000}
+ assert params.effects_of_investment_per_size == {'cost': 1200}
+ assert params.effects_of_retirement == {'cost': 5000}
+ assert params.piecewise_effects_of_investment is test_piecewise
+
+ # Verify old attributes still work (accessing deprecated properties - triggers warnings)
+ with pytest.warns(DeprecationWarning):
+ assert params.fix_effects == {'cost': 25000}
+ with pytest.warns(DeprecationWarning):
+ assert params.specific_effects == {'cost': 1200}
+ with pytest.warns(DeprecationWarning):
+ assert params.divest_effects == {'cost': 5000}
+ with pytest.warns(DeprecationWarning):
+ assert params.piecewise_effects is test_piecewise
+
+ def test_both_old_and_new_raises_error(self):
+ """Test that specifying both old and new parameter names raises ValueError."""
+ # fix_effects + effects_of_investment
+ with pytest.raises(
+ ValueError, match='Either fix_effects or effects_of_investment can be specified, but not both'
+ ):
+ InvestParameters(
+ fix_effects={'cost': 10000},
+ effects_of_investment={'cost': 25000},
+ )
+
+ # specific_effects + effects_of_investment_per_size
+ with pytest.raises(
+ ValueError,
+ match='Either specific_effects or effects_of_investment_per_size can be specified, but not both',
+ ):
+ InvestParameters(
+ specific_effects={'cost': 1200},
+ effects_of_investment_per_size={'cost': 1500},
+ )
+
+ # divest_effects + effects_of_retirement
+ with pytest.raises(
+ ValueError, match='Either divest_effects or effects_of_retirement can be specified, but not both'
+ ):
+ InvestParameters(
+ divest_effects={'cost': 5000},
+ effects_of_retirement={'cost': 6000},
+ )
+
+ # piecewise_effects + piecewise_effects_of_investment
+ from flixopt.interface import Piece, Piecewise, PiecewiseEffects
+
+ test_piecewise1 = PiecewiseEffects(
+ piecewise_origin=Piecewise([Piece(0, 100)]),
+ piecewise_shares={'cost': Piecewise([Piece(800, 600)])},
+ )
+ test_piecewise2 = PiecewiseEffects(
+ piecewise_origin=Piecewise([Piece(0, 200)]),
+ piecewise_shares={'cost': Piecewise([Piece(900, 700)])},
+ )
+ with pytest.raises(
+ ValueError,
+ match='Either piecewise_effects or piecewise_effects_of_investment can be specified, but not both',
+ ):
+ InvestParameters(
+ piecewise_effects=test_piecewise1,
+ piecewise_effects_of_investment=test_piecewise2,
+ )
+
+ def test_piecewise_effects_of_investment_new_parameter(self):
+ """Test that piecewise_effects_of_investment works correctly."""
+ from flixopt.interface import Piece, Piecewise, PiecewiseEffects
+
+ test_piecewise = PiecewiseEffects(
+ piecewise_origin=Piecewise([Piece(0, 100)]),
+ piecewise_shares={'cost': Piecewise([Piece(800, 600)])},
+ )
+
+ with warnings.catch_warnings():
+ warnings.simplefilter('error', DeprecationWarning)
+ # Should not raise DeprecationWarning when using new parameter
+ params = InvestParameters(piecewise_effects_of_investment=test_piecewise)
+ assert params.piecewise_effects_of_investment is test_piecewise
+
+ # Accessing deprecated property triggers warning
+ with pytest.warns(DeprecationWarning):
+ assert params.piecewise_effects is test_piecewise
+
+ def test_backward_compatibility_with_features(self):
+ """Test that old attribute names remain accessible for features.py compatibility."""
+ from flixopt.interface import Piece, Piecewise, PiecewiseEffects
+
+ test_piecewise = PiecewiseEffects(
+ piecewise_origin=Piecewise([Piece(0, 100)]),
+ piecewise_shares={'cost': Piecewise([Piece(800, 600)])},
+ )
+
+ params = InvestParameters(
+ effects_of_investment={'cost': 25000},
+ effects_of_investment_per_size={'cost': 1200},
+ effects_of_retirement={'cost': 5000},
+ piecewise_effects_of_investment=test_piecewise,
+ )
+
+ # Old properties should still be accessible (for features.py) but with warnings
+ with pytest.warns(DeprecationWarning):
+ assert params.fix_effects == {'cost': 25000}
+ with pytest.warns(DeprecationWarning):
+ assert params.specific_effects == {'cost': 1200}
+ with pytest.warns(DeprecationWarning):
+ assert params.divest_effects == {'cost': 5000}
+ with pytest.warns(DeprecationWarning):
+ assert params.piecewise_effects is test_piecewise
+
+ # Properties should return the same objects as the new attributes
+ with pytest.warns(DeprecationWarning):
+ assert params.fix_effects is params.effects_of_investment
+ with pytest.warns(DeprecationWarning):
+ assert params.specific_effects is params.effects_of_investment_per_size
+ with pytest.warns(DeprecationWarning):
+ assert params.divest_effects is params.effects_of_retirement
+ with pytest.warns(DeprecationWarning):
+ assert params.piecewise_effects is params.piecewise_effects_of_investment
+
+ def test_empty_parameters(self):
+ """Test that empty/None parameters work correctly."""
+ params = InvestParameters()
+
+ assert params.effects_of_investment == {}
+ assert params.effects_of_investment_per_size == {}
+ assert params.effects_of_retirement == {}
+ assert params.piecewise_effects_of_investment is None
+
+ # Old properties should also be empty (but with warnings)
+ with pytest.warns(DeprecationWarning):
+ assert params.fix_effects == {}
+ with pytest.warns(DeprecationWarning):
+ assert params.specific_effects == {}
+ with pytest.warns(DeprecationWarning):
+ assert params.divest_effects == {}
+ with pytest.warns(DeprecationWarning):
+ assert params.piecewise_effects is None
+
+ def test_mixed_old_and_new_parameters(self):
+ """Test mixing old and new parameter names (not recommended but should work)."""
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter('always', DeprecationWarning)
+ params = InvestParameters(
+ effects_of_investment={'cost': 25000}, # New
+ specific_effects={'cost': 1200}, # Old
+ effects_of_retirement={'cost': 5000}, # New
+ )
+
+ # Should only warn about the old parameter
+ assert len([warning for warning in w if issubclass(warning.category, DeprecationWarning)]) == 1
+
+ # All should work correctly
+ assert params.effects_of_investment == {'cost': 25000}
+ assert params.effects_of_investment_per_size == {'cost': 1200}
+ assert params.effects_of_retirement == {'cost': 5000}
+
+ def test_unexpected_keyword_arguments(self):
+ """Test that unexpected keyword arguments raise TypeError."""
+ # Single unexpected argument
+ with pytest.raises(
+ TypeError, match="InvestParameters.__init__\\(\\) got unexpected keyword argument\\(s\\): 'invalid_param'"
+ ):
+ InvestParameters(invalid_param='value')
+
+ # Multiple unexpected arguments
+ with pytest.raises(
+ TypeError,
+ match="InvestParameters.__init__\\(\\) got unexpected keyword argument\\(s\\): 'param1', 'param2'",
+ ):
+ InvestParameters(param1='value1', param2='value2')
+
+ # Mix of valid and invalid arguments
+ with pytest.raises(
+ TypeError, match="InvestParameters.__init__\\(\\) got unexpected keyword argument\\(s\\): 'typo'"
+ ):
+ InvestParameters(effects_of_investment={'cost': 100}, typo='value')
+
+ def test_optional_parameter_deprecation(self):
+ """Test that optional parameter triggers deprecation warning and maps to mandatory."""
+ # Test optional=True (should map to mandatory=False)
+ with pytest.warns(DeprecationWarning, match='optional.*deprecated.*mandatory'):
+ params = InvestParameters(optional=True)
+ assert params.mandatory is False
+
+ # Test optional=False (should map to mandatory=True)
+ with pytest.warns(DeprecationWarning, match='optional.*deprecated.*mandatory'):
+ params = InvestParameters(optional=False)
+ assert params.mandatory is True
+
+ def test_mandatory_parameter_no_warning(self):
+ """Test that mandatory parameter doesn't trigger warnings."""
+ with warnings.catch_warnings():
+ warnings.simplefilter('error', DeprecationWarning)
+ # Test mandatory=True
+ params = InvestParameters(mandatory=True)
+ assert params.mandatory is True
+
+ # Test mandatory=False (explicit)
+ params = InvestParameters(mandatory=False)
+ assert params.mandatory is False
+
+ def test_mandatory_default_value(self):
+ """Test that default value of mandatory is False when neither optional nor mandatory is specified."""
+ params = InvestParameters()
+ assert params.mandatory is False
+
+ def test_both_optional_and_mandatory_no_error(self):
+ """Test that specifying both optional and mandatory doesn't raise error.
+
+ Note: Conflict checking is disabled for mandatory/optional because mandatory has
+ a non-None default value (False), making it impossible to distinguish between
+ an explicit mandatory=False and the default value. The deprecated optional
+ parameter will take precedence when both are specified.
+ """
+ # When both are specified, optional takes precedence (with deprecation warning)
+ with pytest.warns(DeprecationWarning, match='optional.*deprecated.*mandatory'):
+ params = InvestParameters(optional=True, mandatory=False)
+ # optional=True should result in mandatory=False
+ assert params.mandatory is False
+
+ with pytest.warns(DeprecationWarning, match='optional.*deprecated.*mandatory'):
+ params = InvestParameters(optional=False, mandatory=True)
+ # optional=False should result in mandatory=True (optional takes precedence)
+ assert params.mandatory is True
+
+ def test_optional_property_deprecation(self):
+ """Test that accessing optional property triggers deprecation warning."""
+ params = InvestParameters(mandatory=True)
+
+ # Reading the property triggers warning
+ with pytest.warns(DeprecationWarning, match="Property 'optional' is deprecated"):
+ assert params.optional is False
+
+ # Setting the property triggers warning
+ with pytest.warns(DeprecationWarning, match="Property 'optional' is deprecated"):
+ params.optional = True
+ assert params.mandatory is False
diff --git a/tests/test_io.py b/tests/test_io.py
index 2ec74955f..f5ca2174a 100644
--- a/tests/test_io.py
+++ b/tests/test_io.py
@@ -9,10 +9,19 @@
flow_system_long,
flow_system_segments_of_flows_2,
simple_flow_system,
+ simple_flow_system_scenarios,
)
-@pytest.fixture(params=[flow_system_base, flow_system_segments_of_flows_2, simple_flow_system, flow_system_long])
+@pytest.fixture(
+ params=[
+ flow_system_base,
+ simple_flow_system_scenarios,
+ flow_system_segments_of_flows_2,
+ simple_flow_system,
+ flow_system_long,
+ ]
+)
def flow_system(request):
fs = request.getfixturevalue(request.param.__name__)
if isinstance(fs, fx.FlowSystem):
@@ -26,6 +35,7 @@ def test_flow_system_file_io(flow_system, highs_solver):
calculation_0 = fx.FullCalculation('IO', flow_system=flow_system)
calculation_0.do_modeling()
calculation_0.solve(highs_solver)
+ calculation_0.flow_system.plot_network()
calculation_0.results.to_file()
paths = CalculationResultsPaths(calculation_0.folder, calculation_0.name)
@@ -34,6 +44,7 @@ def test_flow_system_file_io(flow_system, highs_solver):
calculation_1 = fx.FullCalculation('Loaded_IO', flow_system=flow_system_1)
calculation_1.do_modeling()
calculation_1.solve(highs_solver)
+ calculation_1.flow_system.plot_network()
assert_almost_equal_numeric(
calculation_0.results.model.objective.value,
@@ -42,18 +53,19 @@ def test_flow_system_file_io(flow_system, highs_solver):
)
assert_almost_equal_numeric(
- calculation_0.results.solution['costs|total'].values,
- calculation_1.results.solution['costs|total'].values,
+ calculation_0.results.solution['costs'].values,
+ calculation_1.results.solution['costs'].values,
'costs doesnt match expected value',
)
def test_flow_system_io(flow_system):
- di = flow_system.as_dict()
- _ = fx.FlowSystem.from_dict(di)
+ flow_system.to_json('fs.json')
+
+ ds = flow_system.to_dataset()
+ new_fs = fx.FlowSystem.from_dataset(ds)
- ds = flow_system.as_dataset()
- _ = fx.FlowSystem.from_dataset(ds)
+ assert flow_system == new_fs
print(flow_system)
flow_system.__repr__()
diff --git a/tests/test_linear_converter.py b/tests/test_linear_converter.py
index 93ace3e78..1884c8d72 100644
--- a/tests/test_linear_converter.py
+++ b/tests/test_linear_converter.py
@@ -10,9 +10,9 @@
class TestLinearConverterModel:
"""Test the LinearConverterModel class."""
- def test_basic_linear_converter(self, basic_flow_system_linopy):
+ def test_basic_linear_converter(self, basic_flow_system_linopy_coords, coords_config):
"""Test basic initialization and modeling of a LinearConverter."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create input and output flows
input_flow = fx.Flow('input', bus='input_bus', size=100)
@@ -40,13 +40,13 @@ def test_basic_linear_converter(self, basic_flow_system_linopy):
# Check conversion constraint (input * 0.8 == output * 1.0)
assert_conequal(
model.constraints['Converter|conversion_0'],
- input_flow.model.flow_rate * 0.8 == output_flow.model.flow_rate * 1.0,
+ input_flow.submodel.flow_rate * 0.8 == output_flow.submodel.flow_rate * 1.0,
)
- def test_linear_converter_time_varying(self, basic_flow_system_linopy):
+ def test_linear_converter_time_varying(self, basic_flow_system_linopy_coords, coords_config):
"""Test a LinearConverter with time-varying conversion factors."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
# Create time-varying efficiency (e.g., temperature-dependent)
varying_efficiency = np.linspace(0.7, 0.9, len(timesteps))
@@ -78,12 +78,12 @@ def test_linear_converter_time_varying(self, basic_flow_system_linopy):
# Check conversion constraint (input * efficiency_series == output * 1.0)
assert_conequal(
model.constraints['Converter|conversion_0'],
- input_flow.model.flow_rate * efficiency_series == output_flow.model.flow_rate * 1.0,
+ input_flow.submodel.flow_rate * efficiency_series == output_flow.submodel.flow_rate * 1.0,
)
- def test_linear_converter_multiple_factors(self, basic_flow_system_linopy):
+ def test_linear_converter_multiple_factors(self, basic_flow_system_linopy_coords, coords_config):
"""Test a LinearConverter with multiple conversion factors."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create flows
input_flow1 = fx.Flow('input1', bus='input_bus1', size=100)
@@ -119,24 +119,24 @@ def test_linear_converter_multiple_factors(self, basic_flow_system_linopy):
# Check conversion constraint 1 (input1 * 0.8 == output1 * 1.0)
assert_conequal(
model.constraints['Converter|conversion_0'],
- input_flow1.model.flow_rate * 0.8 == output_flow1.model.flow_rate * 1.0,
+ input_flow1.submodel.flow_rate * 0.8 == output_flow1.submodel.flow_rate * 1.0,
)
# Check conversion constraint 2 (input2 * 0.5 == output2 * 1.0)
assert_conequal(
model.constraints['Converter|conversion_1'],
- input_flow2.model.flow_rate * 0.5 == output_flow2.model.flow_rate * 1.0,
+ input_flow2.submodel.flow_rate * 0.5 == output_flow2.submodel.flow_rate * 1.0,
)
# Check conversion constraint 3 (input1 * 0.2 == output2 * 0.3)
assert_conequal(
model.constraints['Converter|conversion_2'],
- input_flow1.model.flow_rate * 0.2 == output_flow2.model.flow_rate * 0.3,
+ input_flow1.submodel.flow_rate * 0.2 == output_flow2.submodel.flow_rate * 0.3,
)
- def test_linear_converter_with_on_off(self, basic_flow_system_linopy):
+ def test_linear_converter_with_on_off(self, basic_flow_system_linopy_coords, coords_config):
"""Test a LinearConverter with OnOffParameters."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create input and output flows
input_flow = fx.Flow('input', bus='input_bus', size=100)
@@ -144,7 +144,7 @@ def test_linear_converter_with_on_off(self, basic_flow_system_linopy):
# Create OnOffParameters
on_off_params = fx.OnOffParameters(
- on_hours_total_min=10, on_hours_total_max=40, effects_per_running_hour={'Costs': 5}
+ on_hours_total_min=10, on_hours_total_max=40, effects_per_running_hour={'costs': 5}
)
# Create a linear converter with OnOffParameters
@@ -173,27 +173,26 @@ def test_linear_converter_with_on_off(self, basic_flow_system_linopy):
# Check on_hours_total constraint
assert_conequal(
model.constraints['Converter|on_hours_total'],
- converter.model.on_off.variables['Converter|on_hours_total']
- == (converter.model.on_off.variables['Converter|on'] * model.hours_per_step).sum(),
+ model.variables['Converter|on_hours_total']
+ == (model.variables['Converter|on'] * model.hours_per_step).sum('time'),
)
# Check conversion constraint
assert_conequal(
model.constraints['Converter|conversion_0'],
- input_flow.model.flow_rate * 0.8 == output_flow.model.flow_rate * 1.0,
+ input_flow.submodel.flow_rate * 0.8 == output_flow.submodel.flow_rate * 1.0,
)
# Check on_off effects
- assert 'Converter->Costs(operation)' in model.constraints
+ assert 'Converter->costs(temporal)' in model.constraints
assert_conequal(
- model.constraints['Converter->Costs(operation)'],
- model.variables['Converter->Costs(operation)']
- == converter.model.on_off.variables['Converter|on'] * model.hours_per_step * 5,
+ model.constraints['Converter->costs(temporal)'],
+ model.variables['Converter->costs(temporal)'] == model.variables['Converter|on'] * model.hours_per_step * 5,
)
- def test_linear_converter_multidimensional(self, basic_flow_system_linopy):
+ def test_linear_converter_multidimensional(self, basic_flow_system_linopy_coords, coords_config):
"""Test LinearConverter with multiple inputs, outputs, and connections between them."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create a more complex setup with multiple flows
input_flow1 = fx.Flow('fuel', bus='fuel_bus', size=100)
@@ -232,23 +231,23 @@ def test_linear_converter_multidimensional(self, basic_flow_system_linopy):
# Check the conversion equations
assert_conequal(
model.constraints['MultiConverter|conversion_0'],
- input_flow1.model.flow_rate * 0.7 == output_flow1.model.flow_rate * 1.0,
+ input_flow1.submodel.flow_rate * 0.7 == output_flow1.submodel.flow_rate * 1.0,
)
assert_conequal(
model.constraints['MultiConverter|conversion_1'],
- input_flow2.model.flow_rate * 0.3 == output_flow2.model.flow_rate * 1.0,
+ input_flow2.submodel.flow_rate * 0.3 == output_flow2.submodel.flow_rate * 1.0,
)
assert_conequal(
model.constraints['MultiConverter|conversion_2'],
- input_flow1.model.flow_rate * 0.1 == output_flow2.model.flow_rate * 0.5,
+ input_flow1.submodel.flow_rate * 0.1 == output_flow2.submodel.flow_rate * 0.5,
)
- def test_edge_case_time_varying_conversion(self, basic_flow_system_linopy):
+ def test_edge_case_time_varying_conversion(self, basic_flow_system_linopy_coords, coords_config):
"""Test edge case with extreme time-varying conversion factors."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+ timesteps = flow_system.timesteps
# Create fluctuating conversion efficiency (e.g., for a heat pump)
# Values range from very low (0.1) to very high (5.0)
@@ -280,16 +279,19 @@ def test_edge_case_time_varying_conversion(self, basic_flow_system_linopy):
# Check that the correct constraint was created
assert 'VariableConverter|conversion_0' in model.constraints
+ factor = converter.conversion_factors[0]['electricity']
+
+ assert factor.dims == tuple(model.get_coords())
+
# Verify the constraint has the time-varying coefficient
assert_conequal(
model.constraints['VariableConverter|conversion_0'],
- input_flow.model.flow_rate * fluctuating_cop == output_flow.model.flow_rate * 1.0,
+ input_flow.submodel.flow_rate * factor == output_flow.submodel.flow_rate * 1.0,
)
- def test_piecewise_conversion(self, basic_flow_system_linopy):
+ def test_piecewise_conversion(self, basic_flow_system_linopy_coords, coords_config):
"""Test a LinearConverter with PiecewiseConversion."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create input and output flows
input_flow = fx.Flow('input', bus='input_bus', size=100)
@@ -318,11 +320,11 @@ def test_piecewise_conversion(self, basic_flow_system_linopy):
# Create model with the piecewise conversion
model = create_linopy_model(flow_system)
- # Verify that PiecewiseModel was created and added as a sub_model
- assert converter.model.piecewise_conversion is not None
+ # Verify that PiecewiseModel was created and added as a submodel
+ assert converter.submodel.piecewise_conversion is not None
# Get the PiecewiseModel instance
- piecewise_model = converter.model.piecewise_conversion
+ piecewise_model = converter.submodel.piecewise_conversion
# Check that we have the expected pieces (2 in this case)
assert len(piecewise_model.pieces) == 2
@@ -337,9 +339,9 @@ def test_piecewise_conversion(self, basic_flow_system_linopy):
lambda1 = model.variables[f'Converter|Piece_{i}|lambda1']
inside_piece = model.variables[f'Converter|Piece_{i}|inside_piece']
- assert_var_equal(inside_piece, model.add_variables(binary=True, coords=(timesteps,)))
- assert_var_equal(lambda0, model.add_variables(lower=0, upper=1, coords=(timesteps,)))
- assert_var_equal(lambda1, model.add_variables(lower=0, upper=1, coords=(timesteps,)))
+ assert_var_equal(inside_piece, model.add_variables(binary=True, coords=model.get_coords()))
+ assert_var_equal(lambda0, model.add_variables(lower=0, upper=1, coords=model.get_coords()))
+ assert_var_equal(lambda1, model.add_variables(lower=0, upper=1, coords=model.get_coords()))
# Check that the inside_piece constraint exists
assert f'Converter|Piece_{i}|inside_piece' in model.constraints
@@ -375,10 +377,9 @@ def test_piecewise_conversion(self, basic_flow_system_linopy):
<= 1,
)
- def test_piecewise_conversion_with_onoff(self, basic_flow_system_linopy):
+ def test_piecewise_conversion_with_onoff(self, basic_flow_system_linopy_coords, coords_config):
"""Test a LinearConverter with PiecewiseConversion and OnOffParameters."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create input and output flows
input_flow = fx.Flow('input', bus='input_bus', size=100)
@@ -396,7 +397,7 @@ def test_piecewise_conversion_with_onoff(self, basic_flow_system_linopy):
# Create OnOffParameters
on_off_params = fx.OnOffParameters(
- on_hours_total_min=10, on_hours_total_max=40, effects_per_running_hour={'Costs': 5}
+ on_hours_total_min=10, on_hours_total_max=40, effects_per_running_hour={'costs': 5}
)
# Create a linear converter with piecewise conversion and on/off parameters
@@ -418,11 +419,11 @@ def test_piecewise_conversion_with_onoff(self, basic_flow_system_linopy):
# Create model with the piecewise conversion
model = create_linopy_model(flow_system)
- # Verify that PiecewiseModel was created and added as a sub_model
- assert converter.model.piecewise_conversion is not None
+ # Verify that PiecewiseModel was created and added as a submodel
+ assert converter.submodel.piecewise_conversion is not None
# Get the PiecewiseModel instance
- piecewise_model = converter.model.piecewise_conversion
+ piecewise_model = converter.submodel.piecewise_conversion
# Check that we have the expected pieces (2 in this case)
assert len(piecewise_model.pieces) == 2
@@ -442,9 +443,9 @@ def test_piecewise_conversion_with_onoff(self, basic_flow_system_linopy):
lambda1 = model.variables[f'Converter|Piece_{i}|lambda1']
inside_piece = model.variables[f'Converter|Piece_{i}|inside_piece']
- assert_var_equal(inside_piece, model.add_variables(binary=True, coords=(timesteps,)))
- assert_var_equal(lambda0, model.add_variables(lower=0, upper=1, coords=(timesteps,)))
- assert_var_equal(lambda1, model.add_variables(lower=0, upper=1, coords=(timesteps,)))
+ assert_var_equal(inside_piece, model.add_variables(binary=True, coords=model.get_coords()))
+ assert_var_equal(lambda0, model.add_variables(lower=0, upper=1, coords=model.get_coords()))
+ assert_var_equal(lambda1, model.add_variables(lower=0, upper=1, coords=model.get_coords()))
# Check that the inside_piece constraint exists
assert f'Converter|Piece_{i}|inside_piece' in model.constraints
@@ -483,16 +484,14 @@ def test_piecewise_conversion_with_onoff(self, basic_flow_system_linopy):
assert 'Converter|on_hours_total' in model.constraints
assert_conequal(
model.constraints['Converter|on_hours_total'],
- converter.model.on_off.variables['Converter|on_hours_total']
- == (converter.model.on_off.variables['Converter|on'] * model.hours_per_step).sum(),
+ model['Converter|on_hours_total'] == (model['Converter|on'] * model.hours_per_step).sum('time'),
)
# Verify that the costs effect is applied
- assert 'Converter->Costs(operation)' in model.constraints
+ assert 'Converter->costs(temporal)' in model.constraints
assert_conequal(
- model.constraints['Converter->Costs(operation)'],
- model.variables['Converter->Costs(operation)']
- == converter.model.on_off.variables['Converter|on'] * model.hours_per_step * 5,
+ model.constraints['Converter->costs(temporal)'],
+ model.variables['Converter->costs(temporal)'] == model.variables['Converter|on'] * model.hours_per_step * 5,
)
diff --git a/tests/test_on_hours_computation.py b/tests/test_on_hours_computation.py
index cd0e637d0..578fd7792 100644
--- a/tests/test_on_hours_computation.py
+++ b/tests/test_on_hours_computation.py
@@ -1,108 +1,99 @@
import numpy as np
import pytest
+import xarray as xr
-from flixopt.features import ConsecutiveStateModel, StateModel
+from flixopt.modeling import ModelingUtilities
class TestComputeConsecutiveDuration:
- """Tests for the compute_consecutive_duration static method."""
+ """Tests for the compute_consecutive_hours_in_state static method."""
@pytest.mark.parametrize(
'binary_values, hours_per_timestep, expected',
[
- # Case 1: Both scalar inputs
- (1, 5, 5),
- (0, 3, 0),
- # Case 2: Scalar binary, array hours
- (1, np.array([1, 2, 3]), 3),
- (0, np.array([2, 4, 6]), 0),
- # Case 3: Array binary, scalar hours
- (np.array([0, 0, 1, 1, 1, 0]), 2, 0),
- (np.array([0, 1, 1, 0, 1, 1]), 1, 2),
- (np.array([1, 1, 1]), 2, 6),
- # Case 4: Both array inputs
- (np.array([0, 1, 1, 0, 1, 1]), np.array([1, 2, 3, 4, 5, 6]), 11), # 5+6
- (np.array([1, 0, 0, 1, 1, 1]), np.array([2, 2, 2, 3, 4, 5]), 12), # 3+4+5
- # Case 5: Edge cases
- (np.array([1]), np.array([4]), 4),
- (np.array([0]), np.array([3]), 0),
+ # Case 1: Single timestep DataArrays
+ (xr.DataArray([1], dims=['time']), 5, 5),
+ (xr.DataArray([0], dims=['time']), 3, 0),
+ # Case 2: Array binary, scalar hours
+ (xr.DataArray([0, 0, 1, 1, 1, 0], dims=['time']), 2, 0),
+ (xr.DataArray([0, 1, 1, 0, 1, 1], dims=['time']), 1, 2),
+ (xr.DataArray([1, 1, 1], dims=['time']), 2, 6),
+ # Case 3: Edge cases
+ (xr.DataArray([1], dims=['time']), 4, 4),
+ (xr.DataArray([0], dims=['time']), 3, 0),
+ # Case 4: More complex patterns
+ (xr.DataArray([1, 0, 0, 1, 1, 1], dims=['time']), 2, 6), # 3 consecutive at end * 2 hours
+ (xr.DataArray([0, 1, 1, 1, 0, 0], dims=['time']), 1, 0), # ends with 0
],
)
def test_compute_duration(self, binary_values, hours_per_timestep, expected):
- """Test compute_consecutive_duration with various inputs."""
- result = ConsecutiveStateModel.compute_consecutive_hours_in_state(binary_values, hours_per_timestep)
+ """Test compute_consecutive_hours_in_state with various inputs."""
+ result = ModelingUtilities.compute_consecutive_hours_in_state(binary_values, hours_per_timestep)
assert np.isclose(result, expected)
@pytest.mark.parametrize(
'binary_values, hours_per_timestep',
[
- # Case: Incompatible array lengths
- (np.array([1, 1, 1, 1, 1]), np.array([1, 2])),
+ # Case: hours_per_timestep must be scalar
+ (xr.DataArray([1, 1, 1, 1, 1], dims=['time']), np.array([1, 2])),
],
)
def test_compute_duration_raises_error(self, binary_values, hours_per_timestep):
"""Test error conditions."""
- with pytest.raises(ValueError):
- ConsecutiveStateModel.compute_consecutive_hours_in_state(binary_values, hours_per_timestep)
+ with pytest.raises(TypeError):
+ ModelingUtilities.compute_consecutive_hours_in_state(binary_values, hours_per_timestep)
class TestComputePreviousOnStates:
- """Tests for the compute_previous_on_states static method."""
+ """Tests for the compute_previous_states static method."""
@pytest.mark.parametrize(
'previous_values, expected',
[
- # Case 1: Empty list
- ([], np.array([0])),
- # Case 2: All None values
- ([None, None], np.array([0])),
- # Case 3: Single value arrays
- ([np.array([0])], np.array([0])),
- ([np.array([1])], np.array([1])),
- ([np.array([0.001])], np.array([1])), # Using default epsilon
- ([np.array([1e-4])], np.array([1])),
- ([np.array([1e-8])], np.array([0])),
- # Case 4: Multiple 1D arrays
- ([np.array([0, 5, 0]), np.array([0, 0, 1])], np.array([0, 1, 1])),
- ([np.array([0.1, 0, 0.3]), None, np.array([0, 0, 0])], np.array([1, 0, 1])),
- ([np.array([0, 0, 0]), np.array([0, 1, 0])], np.array([0, 1, 0])),
- ([np.array([0.1, 0, 0]), np.array([0, 0, 0.2])], np.array([1, 0, 1])),
- # Case 6: Mix of None and 1D arrays
- ([None, np.array([0, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 0])], np.array([0, 1, 0])),
- ([np.array([0, 0, 0]), None, np.array([0, 0, 0]), np.array([0, 0, 0])], np.array([0, 0, 0])),
+ # Case 1: Single value DataArrays
+ (xr.DataArray([0], dims=['time']), xr.DataArray([0], dims=['time'])),
+ (xr.DataArray([1], dims=['time']), xr.DataArray([1], dims=['time'])),
+ (xr.DataArray([0.001], dims=['time']), xr.DataArray([1], dims=['time'])), # Using default epsilon
+ (xr.DataArray([1e-4], dims=['time']), xr.DataArray([1], dims=['time'])),
+ (xr.DataArray([1e-8], dims=['time']), xr.DataArray([0], dims=['time'])),
+ # Case 2: Multiple timestep DataArrays
+ (xr.DataArray([0, 5, 0], dims=['time']), xr.DataArray([0, 1, 0], dims=['time'])),
+ (xr.DataArray([0.1, 0, 0.3], dims=['time']), xr.DataArray([1, 0, 1], dims=['time'])),
+ (xr.DataArray([0, 0, 0], dims=['time']), xr.DataArray([0, 0, 0], dims=['time'])),
+ (xr.DataArray([0.1, 0, 0.2], dims=['time']), xr.DataArray([1, 0, 1], dims=['time'])),
],
)
def test_compute_previous_on_states(self, previous_values, expected):
- """Test compute_previous_on_states with various inputs."""
- result = StateModel.compute_previous_states(previous_values)
- np.testing.assert_array_equal(result, expected)
+ """Test compute_previous_states with various inputs."""
+ result = ModelingUtilities.compute_previous_states(previous_values)
+ xr.testing.assert_equal(result, expected)
@pytest.mark.parametrize(
'previous_values, epsilon, expected',
[
# Testing with different epsilon values
- ([np.array([1e-6, 1e-4, 1e-2])], 1e-3, np.array([0, 0, 1])),
- ([np.array([1e-6, 1e-4, 1e-2])], 1e-5, np.array([0, 1, 1])),
- ([np.array([1e-6, 1e-4, 1e-2])], 1e-1, np.array([0, 0, 0])),
+ (xr.DataArray([1e-6, 1e-4, 1e-2], dims=['time']), 1e-3, xr.DataArray([0, 0, 1], dims=['time'])),
+ (xr.DataArray([1e-6, 1e-4, 1e-2], dims=['time']), 1e-5, xr.DataArray([0, 1, 1], dims=['time'])),
+ (xr.DataArray([1e-6, 1e-4, 1e-2], dims=['time']), 1e-1, xr.DataArray([0, 0, 0], dims=['time'])),
# Mixed case with custom epsilon
- ([np.array([0.05, 0.005, 0.0005])], 0.01, np.array([1, 0, 0])),
+ (xr.DataArray([0.05, 0.005, 0.0005], dims=['time']), 0.01, xr.DataArray([1, 0, 0], dims=['time'])),
],
)
def test_compute_previous_on_states_with_epsilon(self, previous_values, epsilon, expected):
- """Test compute_previous_on_states with custom epsilon values."""
- result = StateModel.compute_previous_states(previous_values, epsilon)
- np.testing.assert_array_equal(result, expected)
+ """Test compute_previous_states with custom epsilon values."""
+ result = ModelingUtilities.compute_previous_states(previous_values, epsilon)
+ xr.testing.assert_equal(result, expected)
@pytest.mark.parametrize(
'previous_values, expected_shape',
[
# Check that output shapes match expected dimensions
- ([np.array([0, 1, 0, 1])], (4,)),
- ([np.array([0, 1]), np.array([1, 0]), np.array([0, 0])], (2,)),
- ([np.array([0, 1]), np.array([1, 0])], (2,)),
+ (xr.DataArray([0, 1, 0, 1], dims=['time']), (4,)),
+ (xr.DataArray([0, 1], dims=['time']), (2,)),
+ (xr.DataArray([1, 0], dims=['time']), (2,)),
],
)
def test_output_shapes(self, previous_values, expected_shape):
"""Test that output array has the correct shape."""
- result = StateModel.compute_previous_states(previous_values)
+ result = ModelingUtilities.compute_previous_states(previous_values)
assert result.shape == expected_shape
diff --git a/tests/test_plots.py b/tests/test_plots.py
index 0c38f760c..61c26c510 100644
--- a/tests/test_plots.py
+++ b/tests/test_plots.py
@@ -7,7 +7,6 @@
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
-import plotly
import pytest
from flixopt import plotting
@@ -18,6 +17,14 @@ class TestPlots(unittest.TestCase):
def setUp(self):
np.random.seed(72)
+ def tearDown(self):
+ """Cleanup matplotlib and plotly resources"""
+ plt.close('all')
+ # Force garbage collection to cleanup any lingering resources
+ import gc
+
+ gc.collect()
+
@staticmethod
def get_sample_data(
nr_of_columns: int = 7,
@@ -51,38 +58,44 @@ def get_sample_data(
def test_bar_plots(self):
data = self.get_sample_data(nr_of_columns=10, nr_of_periods=1, time_steps_per_period=24)
- plotly.offline.plot(plotting.with_plotly(data, 'bar'))
- plotting.with_matplotlib(data, 'bar')
- plt.show()
+ # Create plotly figure (json renderer doesn't need .show())
+ _ = plotting.with_plotly(data, 'stacked_bar')
+ plotting.with_matplotlib(data, 'stacked_bar')
+ plt.savefig(f'test_plot_{self._testMethodName}.png', bbox_inches='tight')
+ plt.close('all') # Close all figures to prevent memory leaks
data = self.get_sample_data(
nr_of_columns=10, nr_of_periods=5, time_steps_per_period=24, drop_fraction_of_indices=0.3
)
- plotly.offline.plot(plotting.with_plotly(data, 'bar'))
- plotting.with_matplotlib(data, 'bar')
- plt.show()
+ # Create plotly figure (json renderer doesn't need .show())
+ _ = plotting.with_plotly(data, 'stacked_bar')
+ plotting.with_matplotlib(data, 'stacked_bar')
+ plt.savefig(f'test_plot_{self._testMethodName}.png', bbox_inches='tight')
+ plt.close('all') # Close all figures to prevent memory leaks
def test_line_plots(self):
data = self.get_sample_data(nr_of_columns=10, nr_of_periods=1, time_steps_per_period=24)
- plotly.offline.plot(plotting.with_plotly(data, 'line'))
+ _ = plotting.with_plotly(data, 'line')
plotting.with_matplotlib(data, 'line')
- plt.show()
+ plt.savefig(f'test_plot_{self._testMethodName}.png', bbox_inches='tight')
+ plt.close('all') # Close all figures to prevent memory leaks
data = self.get_sample_data(
nr_of_columns=10, nr_of_periods=5, time_steps_per_period=24, drop_fraction_of_indices=0.3
)
- plotly.offline.plot(plotting.with_plotly(data, 'line'))
+ _ = plotting.with_plotly(data, 'line')
plotting.with_matplotlib(data, 'line')
- plt.show()
+ plt.savefig(f'test_plot_{self._testMethodName}.png', bbox_inches='tight')
+ plt.close('all') # Close all figures to prevent memory leaks
def test_stacked_line_plots(self):
data = self.get_sample_data(nr_of_columns=10, nr_of_periods=1, time_steps_per_period=24)
- plotly.offline.plot(plotting.with_plotly(data, 'area'))
+ _ = plotting.with_plotly(data, 'area')
data = self.get_sample_data(
nr_of_columns=10, nr_of_periods=5, time_steps_per_period=24, drop_fraction_of_indices=0.3
)
- plotly.offline.plot(plotting.with_plotly(data, 'area'))
+ _ = plotting.with_plotly(data, 'area')
def test_heat_map_plots(self):
# Generate single-column data with datetime index for heatmap
@@ -91,9 +104,10 @@ def test_heat_map_plots(self):
# Convert data for heatmap plotting using 'day' as period and 'hour' steps
heatmap_data = plotting.reshape_to_2d(data.iloc[:, 0].values.flatten(), 24)
# Plotting heatmaps with Plotly and Matplotlib
- plotly.offline.plot(plotting.heat_map_plotly(pd.DataFrame(heatmap_data)))
+ _ = plotting.heat_map_plotly(pd.DataFrame(heatmap_data))
plotting.heat_map_matplotlib(pd.DataFrame(heatmap_data))
- plt.show()
+ plt.savefig(f'test_plot_{self._testMethodName}.png', bbox_inches='tight')
+ plt.close('all') # Close all figures to prevent memory leaks
def test_heat_map_plots_resampling(self):
date_range = pd.date_range(start='2023-01-01', end='2023-03-21', freq='5min')
@@ -113,21 +127,24 @@ def test_heat_map_plots_resampling(self):
data = df_irregular
# Convert data for heatmap plotting using 'day' as period and 'hour' steps
heatmap_data = plotting.heat_map_data_from_df(data, 'MS', 'D')
- plotly.offline.plot(plotting.heat_map_plotly(heatmap_data))
+ _ = plotting.heat_map_plotly(heatmap_data)
plotting.heat_map_matplotlib(pd.DataFrame(heatmap_data))
- plt.show()
+ plt.savefig(f'test_plot_{self._testMethodName}.png', bbox_inches='tight')
+ plt.close('all') # Close all figures to prevent memory leaks
heatmap_data = plotting.heat_map_data_from_df(data, 'W', 'h', fill='ffill')
# Plotting heatmaps with Plotly and Matplotlib
- plotly.offline.plot(plotting.heat_map_plotly(pd.DataFrame(heatmap_data)))
+ _ = plotting.heat_map_plotly(pd.DataFrame(heatmap_data))
plotting.heat_map_matplotlib(pd.DataFrame(heatmap_data))
- plt.show()
+ plt.savefig(f'test_plot_{self._testMethodName}.png', bbox_inches='tight')
+ plt.close('all') # Close all figures to prevent memory leaks
heatmap_data = plotting.heat_map_data_from_df(data, 'D', 'h', fill='ffill')
# Plotting heatmaps with Plotly and Matplotlib
- plotly.offline.plot(plotting.heat_map_plotly(pd.DataFrame(heatmap_data)))
+ _ = plotting.heat_map_plotly(pd.DataFrame(heatmap_data))
plotting.heat_map_matplotlib(pd.DataFrame(heatmap_data))
- plt.show()
+ plt.savefig(f'test_plot_{self._testMethodName}.png', bbox_inches='tight')
+ plt.close('all') # Close all figures to prevent memory leaks
if __name__ == '__main__':
diff --git a/tests/test_results_plots.py b/tests/test_results_plots.py
index ec50555a3..35a219e31 100644
--- a/tests/test_results_plots.py
+++ b/tests/test_results_plots.py
@@ -59,12 +59,7 @@ def test_results_plots(flow_system, plotting_engine, show, save, color_spec):
)
results['Speicher'].plot_node_balance_pie(engine=plotting_engine, save=save, show=show, colors=color_spec)
-
- if plotting_engine == 'matplotlib':
- with pytest.raises(NotImplementedError):
- results['Speicher'].plot_charge_state(engine=plotting_engine)
- else:
- results['Speicher'].plot_charge_state(engine=plotting_engine)
+ results['Speicher'].plot_charge_state(engine=plotting_engine)
plt.close('all')
diff --git a/tests/test_scenarios.py b/tests/test_scenarios.py
new file mode 100644
index 000000000..3f0637c91
--- /dev/null
+++ b/tests/test_scenarios.py
@@ -0,0 +1,692 @@
+import numpy as np
+import pandas as pd
+import pytest
+from linopy.testing import assert_linequal
+
+import flixopt as fx
+from flixopt.commons import Effect, InvestParameters, Sink, Source, Storage
+from flixopt.elements import Bus, Flow
+from flixopt.flow_system import FlowSystem
+
+from .conftest import create_calculation_and_solve, create_linopy_model
+
+
+@pytest.fixture
+def test_system():
+ """Create a basic test system with scenarios."""
+ # Create a two-day time index with hourly resolution
+ timesteps = pd.date_range('2023-01-01', periods=48, freq='h', name='time')
+
+ # Create two scenarios
+ scenarios = pd.Index(['Scenario A', 'Scenario B'], name='scenario')
+
+ # Create scenario weights
+ weights = np.array([0.7, 0.3])
+
+ # Create a flow system with scenarios
+ flow_system = FlowSystem(
+ timesteps=timesteps,
+ scenarios=scenarios,
+ weights=weights, # Scenario weights: 0.7 for Scenario A, 0.3 for Scenario B
+ )
+
+ # Create demand profiles that differ between scenarios
+ # Scenario A: Higher demand in first day, lower in second day
+ # Scenario B: Lower demand in first day, higher in second day
+ demand_profile_a = np.concatenate(
+ [
+ np.sin(np.linspace(0, 2 * np.pi, 24)) * 5 + 10, # Day 1, max ~15
+ np.sin(np.linspace(0, 2 * np.pi, 24)) * 2 + 5, # Day 2, max ~7
+ ]
+ )
+
+ demand_profile_b = np.concatenate(
+ [
+ np.sin(np.linspace(0, 2 * np.pi, 24)) * 2 + 5, # Day 1, max ~7
+ np.sin(np.linspace(0, 2 * np.pi, 24)) * 5 + 10, # Day 2, max ~15
+ ]
+ )
+
+ # Stack the profiles into a 2D array (time, scenario)
+ demand_profiles = np.column_stack([demand_profile_a, demand_profile_b])
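+ # Resulting shape is (48, 2): one row per timestep, one column per scenario, in the order defined above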
+
+ # Create the necessary model elements
+ # Create buses
+ electricity_bus = Bus('Electricity')
+
+ # Create a demand sink with scenario-dependent profiles
+ demand = Flow(label='Demand', bus=electricity_bus.label_full, fixed_relative_profile=demand_profiles)
+ demand_sink = Sink('Demand', inputs=[demand])
+
+ # Create a power source with investment option
+ power_gen = Flow(
+ label='Generation',
+ bus=electricity_bus.label_full,
+ size=InvestParameters(
+ minimum_size=0,
+ maximum_size=20,
+ effects_of_investment_per_size={'costs': 100}, # €/kW
+ ),
+ effects_per_flow_hour={'costs': 20}, # €/MWh
+ )
+ generator = Source('Generator', outputs=[power_gen])
+
+ # Create a storage for electricity
+ storage_charge = Flow(label='Charge', bus=electricity_bus.label_full, size=10)
+ storage_discharge = Flow(label='Discharge', bus=electricity_bus.label_full, size=10)
+ storage = Storage(
+ label='Battery',
+ charging=storage_charge,
+ discharging=storage_discharge,
+ capacity_in_flow_hours=InvestParameters(
+ minimum_size=0,
+ maximum_size=50,
+ effects_of_investment_per_size={'costs': 50}, # €/kWh
+ ),
+ eta_charge=0.95,
+ eta_discharge=0.95,
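+ # 'lastValueOfSim' ties the initial charge state to the final simulated value (cyclic behaviour)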
+ initial_charge_state='lastValueOfSim',
+ )
+
+ # Create effects and objective
+ cost_effect = Effect(label='costs', unit='€', description='Total costs', is_standard=True, is_objective=True)
+
+ # Add all elements to the flow system
+ flow_system.add_elements(electricity_bus, generator, demand_sink, storage, cost_effect)
+
+ # Return the created system and its components
+ return {
+ 'flow_system': flow_system,
+ 'timesteps': timesteps,
+ 'scenarios': scenarios,
+ 'electricity_bus': electricity_bus,
+ 'demand': demand,
+ 'demand_sink': demand_sink,
+ 'generator': generator,
+ 'power_gen': power_gen,
+ 'storage': storage,
+ 'storage_charge': storage_charge,
+ 'storage_discharge': storage_discharge,
+ 'cost_effect': cost_effect,
+ }
+
+
+@pytest.fixture
+def flow_system_complex_scenarios() -> fx.FlowSystem:
+ """
+ Fixture creating a complex three-scenario flow system with investments, on/off behaviour and piecewise effects.
+ """
+ thermal_load = np.array([30, 0, 90, 110, 110, 20, 20, 20, 20])
+ electrical_load = np.array([40, 40, 40, 40, 40, 40, 40, 40, 40])
+ flow_system = fx.FlowSystem(
+ pd.date_range('2020-01-01', periods=9, freq='h', name='time'),
+ scenarios=pd.Index(['A', 'B', 'C'], name='scenario'),
+ )
+ # Define the components and flow_system
+ flow_system.add_elements(
+ fx.Effect('costs', '€', 'Costs', is_standard=True, is_objective=True, share_from_temporal={'CO2': 0.2}),
+ fx.Effect('CO2', 'kg', 'CO2 emissions'),
+ fx.Effect('PE', 'kWh_PE', 'Primary energy', maximum_total=3.5e3),
+ fx.Bus('Strom'),
+ fx.Bus('Fernwärme'),
+ fx.Bus('Gas'),
+ fx.Sink('Wärmelast', inputs=[fx.Flow('Q_th_Last', 'Fernwärme', size=1, fixed_relative_profile=thermal_load)]),
+ fx.Source(
+ 'Gastarif', outputs=[fx.Flow('Q_Gas', 'Gas', size=1000, effects_per_flow_hour={'costs': 0.04, 'CO2': 0.3})]
+ ),
+ fx.Sink('Einspeisung', inputs=[fx.Flow('P_el', 'Strom', effects_per_flow_hour=-1 * electrical_load)]),
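+ # Negative effects_per_flow_hour: feed-in ('Einspeisung') is remunerated rather than costed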
+ )
+
+ boiler = fx.linear_converters.Boiler(
+ 'Kessel',
+ eta=0.5,
+ on_off_parameters=fx.OnOffParameters(effects_per_running_hour={'costs': 0, 'CO2': 1000}),
+ Q_th=fx.Flow(
+ 'Q_th',
+ bus='Fernwärme',
+ load_factor_max=1.0,
+ load_factor_min=0.1,
+ relative_minimum=5 / 50,
+ relative_maximum=1,
+ previous_flow_rate=50,
+ size=fx.InvestParameters(
+ effects_of_investment=1000,
+ fixed_size=50,
+ mandatory=True,
+ effects_of_investment_per_size={'costs': 10, 'PE': 2},
+ ),
+ on_off_parameters=fx.OnOffParameters(
+ on_hours_total_min=0,
+ on_hours_total_max=1000,
+ consecutive_on_hours_max=10,
+ consecutive_on_hours_min=1,
+ consecutive_off_hours_max=10,
+ effects_per_switch_on=0.01,
+ switch_on_total_max=1000,
+ ),
+ flow_hours_total_max=1e6,
+ ),
+ Q_fu=fx.Flow('Q_fu', bus='Gas', size=200, relative_minimum=0, relative_maximum=1),
+ )
+
+ invest_speicher = fx.InvestParameters(
+ effects_of_investment=0,
+ piecewise_effects_of_investment=fx.PiecewiseEffects(
+ piecewise_origin=fx.Piecewise([fx.Piece(5, 25), fx.Piece(25, 100)]),
+ piecewise_shares={
+ 'costs': fx.Piecewise([fx.Piece(50, 250), fx.Piece(250, 800)]),
+ 'PE': fx.Piecewise([fx.Piece(5, 25), fx.Piece(25, 100)]),
+ },
+ ),
+ mandatory=True,
+ effects_of_investment_per_size={'costs': 0.01, 'CO2': 0.01},
+ minimum_size=0,
+ maximum_size=1000,
+ )
+ speicher = fx.Storage(
+ 'Speicher',
+ charging=fx.Flow('Q_th_load', bus='Fernwärme', size=1e4),
+ discharging=fx.Flow('Q_th_unload', bus='Fernwärme', size=1e4),
+ capacity_in_flow_hours=invest_speicher,
+ initial_charge_state=0,
+ maximal_final_charge_state=10,
+ eta_charge=0.9,
+ eta_discharge=1,
+ relative_loss_per_hour=0.08,
+ prevent_simultaneous_charge_and_discharge=True,
+ )
+
+ flow_system.add_elements(boiler, speicher)
+
+ return flow_system
+
+
+@pytest.fixture
+def flow_system_piecewise_conversion_scenarios(flow_system_complex_scenarios) -> fx.FlowSystem:
+ """
+ Extend the complex scenario fixture with a CHP unit whose piecewise conversion uses numeric (partly time-varying) data.
+ """
+ flow_system = flow_system_complex_scenarios
+
+ flow_system.add_elements(
+ fx.LinearConverter(
+ 'KWK',
+ inputs=[fx.Flow('Q_fu', bus='Gas')],
+ outputs=[
+ fx.Flow('P_el', bus='Strom', size=60, relative_maximum=55, previous_flow_rate=10),
+ fx.Flow('Q_th', bus='Fernwärme'),
+ ],
+ piecewise_conversion=fx.PiecewiseConversion(
+ {
+ 'P_el': fx.Piecewise(
+ [
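+ # Piece bounds may be time-varying arrays with one value per timestep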
+ fx.Piece(np.linspace(5, 6, len(flow_system.timesteps)), 30),
+ fx.Piece(40, np.linspace(60, 70, len(flow_system.timesteps))),
+ ]
+ ),
+ 'Q_th': fx.Piecewise([fx.Piece(6, 35), fx.Piece(45, 100)]),
+ 'Q_fu': fx.Piecewise([fx.Piece(12, 70), fx.Piece(90, 200)]),
+ }
+ ),
+ on_off_parameters=fx.OnOffParameters(effects_per_switch_on=0.01),
+ )
+ )
+
+ return flow_system
+
+
+def test_weights(flow_system_piecewise_conversion_scenarios):
+ """Test that scenario weights are correctly used in the model."""
+ scenarios = flow_system_piecewise_conversion_scenarios.scenarios
+ weights = np.linspace(0.5, 1, len(scenarios))
+ flow_system_piecewise_conversion_scenarios.weights = weights
+ model = create_linopy_model(flow_system_piecewise_conversion_scenarios)
+ normalized_weights = (
+ flow_system_piecewise_conversion_scenarios.weights / flow_system_piecewise_conversion_scenarios.weights.sum()
+ )
+ np.testing.assert_allclose(model.weights.values, normalized_weights)
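+ # The objective is the weighted sum of per-scenario costs plus the penalty term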
+ assert_linequal(
+ model.objective.expression, (model.variables['costs'] * normalized_weights).sum() + model.variables['Penalty']
+ )
+ assert np.isclose(model.weights.sum().item(), 1)
+
+
+def test_weights_io(flow_system_piecewise_conversion_scenarios):
+ """Test that scenario weights are correctly used in the model."""
+ scenarios = flow_system_piecewise_conversion_scenarios.scenarios
+ weights = np.linspace(0.5, 1, len(scenarios)) / np.sum(np.linspace(0.5, 1, len(scenarios)))
+ flow_system_piecewise_conversion_scenarios.weights = weights
+ model = create_linopy_model(flow_system_piecewise_conversion_scenarios)
+ np.testing.assert_allclose(model.weights.values, weights)
+ assert_linequal(model.objective.expression, (model.variables['costs'] * weights).sum() + model.variables['Penalty'])
+ assert np.isclose(model.weights.sum().item(), 1.0)
+
+
+def test_scenario_dimensions_in_variables(flow_system_piecewise_conversion_scenarios):
+ """Test that all time variables are correctly broadcasted to scenario dimensions."""
+ model = create_linopy_model(flow_system_piecewise_conversion_scenarios)
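+ # Allowed dims: (time, scenario) for temporal variables, (scenario,) for per-scenario totals, () for scalars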
+ for var in model.variables:
+ assert model.variables[var].dims in [('time', 'scenario'), ('scenario',), ()]
+
+
+def test_full_scenario_optimization(flow_system_piecewise_conversion_scenarios):
+ """Test a full optimization with scenarios and verify results."""
+ scenarios = flow_system_piecewise_conversion_scenarios.scenarios
+ weights = np.linspace(0.5, 1, len(scenarios)) / np.sum(np.linspace(0.5, 1, len(scenarios)))
+ flow_system_piecewise_conversion_scenarios.weights = weights
+ calc = create_calculation_and_solve(
+ flow_system_piecewise_conversion_scenarios,
+ solver=fx.solvers.GurobiSolver(mip_gap=0.01, time_limit_seconds=60),
+ name='test_full_scenario',
+ )
+ calc.results.to_file()
+
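+ # Reload from disk and rebuild the FlowSystem to prove the save/load round trip does not raise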
+ res = fx.results.CalculationResults.from_file('results', 'test_full_scenario')
+ fx.FlowSystem.from_dataset(res.flow_system_data)
+ calc = create_calculation_and_solve(
+ flow_system_piecewise_conversion_scenarios,
+ solver=fx.solvers.GurobiSolver(mip_gap=0.01, time_limit_seconds=60),
+ name='test_full_scenario',
+ )
+
+
+@pytest.mark.skip(reason='This test takes too long with HiGHS and is too large for the free gurobipy license')
+def test_io_persistence(flow_system_piecewise_conversion_scenarios):
+ """Test that results survive a save/load round trip and that re-solving reproduces the objective."""
+ scenarios = flow_system_piecewise_conversion_scenarios.scenarios
+ weights = np.linspace(0.5, 1, len(scenarios)) / np.sum(np.linspace(0.5, 1, len(scenarios)))
+ flow_system_piecewise_conversion_scenarios.weights = weights
+ calc = create_calculation_and_solve(
+ flow_system_piecewise_conversion_scenarios,
+ solver=fx.solvers.HighsSolver(mip_gap=0.001, time_limit_seconds=60),
+ name='test_full_scenario',
+ )
+ calc.results.to_file()
+
+ res = fx.results.CalculationResults.from_file('results', 'test_full_scenario')
+ flow_system_2 = fx.FlowSystem.from_dataset(res.flow_system_data)
+ calc_2 = create_calculation_and_solve(
+ flow_system_2,
+ solver=fx.solvers.HighsSolver(mip_gap=0.001, time_limit_seconds=60),
+ name='test_full_scenario_2',
+ )
+
+ np.testing.assert_allclose(calc.results.objective, calc_2.results.objective, rtol=0.001)
+
+
+def test_scenarios_selection(flow_system_piecewise_conversion_scenarios):
+ flow_system_full = flow_system_piecewise_conversion_scenarios
+ scenarios = flow_system_full.scenarios
+ weights = np.linspace(0.5, 1, len(scenarios)) / np.sum(np.linspace(0.5, 1, len(scenarios)))
+ flow_system_full.weights = weights
+ flow_system = flow_system_full.sel(scenario=scenarios[0:2])
+
+ assert flow_system.scenarios.equals(flow_system_full.scenarios[0:2])
+
+ np.testing.assert_allclose(flow_system.weights.values, flow_system_full.weights[0:2])
+
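+ # normalize_weights=False keeps the sliced weights as-is (they no longer sum to 1 after sel)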
+ calc = fx.FullCalculation(flow_system=flow_system, name='test_full_scenario', normalize_weights=False)
+ calc.do_modeling()
+ calc.solve(fx.solvers.GurobiSolver(mip_gap=0.01, time_limit_seconds=60))
+
+ calc.results.to_file()
+
+ np.testing.assert_allclose(
+ calc.results.objective,
+ ((calc.results.solution['costs'] * flow_system.weights).sum() + calc.results.solution['Penalty']).item(),
+ )  # allclose accounts for solver rounding errors
+
+ assert calc.results.solution.indexes['scenario'].equals(flow_system_full.scenarios[0:2])
+
+
+def test_sizes_per_scenario_default():
+ """Test that scenario_independent_sizes defaults to True (sizes equalized) and flow_rates to False (vary)."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(timesteps=timesteps, scenarios=scenarios)
+
+ assert fs.scenario_independent_sizes is True
+ assert fs.scenario_independent_flow_rates is False
+
+
+def test_sizes_per_scenario_bool():
+ """Test scenario_independent_sizes with boolean values."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ # Test False (vary per scenario)
+ fs1 = fx.FlowSystem(timesteps=timesteps, scenarios=scenarios, scenario_independent_sizes=False)
+ assert fs1.scenario_independent_sizes is False
+
+ # Test True (equalized across scenarios)
+ fs2 = fx.FlowSystem(timesteps=timesteps, scenarios=scenarios, scenario_independent_sizes=True)
+ assert fs2.scenario_independent_sizes is True
+
+
+def test_sizes_per_scenario_list():
+ """Test scenario_independent_sizes with list of element labels."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(
+ timesteps=timesteps,
+ scenarios=scenarios,
+ scenario_independent_sizes=['solar->grid', 'battery->grid'],
+ )
+
+ assert fs.scenario_independent_sizes == ['solar->grid', 'battery->grid']
+
+
+def test_flow_rates_per_scenario_default():
+ """Test that scenario_independent_flow_rates defaults to False (flow rates vary by scenario)."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(timesteps=timesteps, scenarios=scenarios)
+
+ assert fs.scenario_independent_flow_rates is False
+
+
+def test_flow_rates_per_scenario_bool():
+ """Test scenario_independent_flow_rates with boolean values."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ # Test False (vary per scenario)
+ fs1 = fx.FlowSystem(timesteps=timesteps, scenarios=scenarios, scenario_independent_flow_rates=False)
+ assert fs1.scenario_independent_flow_rates is False
+
+ # Test True (equalized across scenarios)
+ fs2 = fx.FlowSystem(timesteps=timesteps, scenarios=scenarios, scenario_independent_flow_rates=True)
+ assert fs2.scenario_independent_flow_rates is True
+
+
+def test_scenario_parameters_property_setters():
+ """Test that scenario parameters can be changed via property setters."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(timesteps=timesteps, scenarios=scenarios)
+
+ # Change scenario_independent_sizes
+ fs.scenario_independent_sizes = True
+ assert fs.scenario_independent_sizes is True
+
+ fs.scenario_independent_sizes = ['component1', 'component2']
+ assert fs.scenario_independent_sizes == ['component1', 'component2']
+
+ # Change scenario_independent_flow_rates
+ fs.scenario_independent_flow_rates = True
+ assert fs.scenario_independent_flow_rates is True
+
+ fs.scenario_independent_flow_rates = ['flow1', 'flow2']
+ assert fs.scenario_independent_flow_rates == ['flow1', 'flow2']
+
+
+def test_scenario_parameters_validation():
+ """Test that scenario parameters are validated correctly."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(timesteps=timesteps, scenarios=scenarios)
+
+ # Test invalid type
+ with pytest.raises(TypeError, match='must be bool or list'):
+ fs.scenario_independent_sizes = 'invalid'
+
+ # Test invalid list content
+ with pytest.raises(ValueError, match='must contain only strings'):
+ fs.scenario_independent_sizes = [1, 2, 3]
+
+
+def test_size_equality_constraints():
+ """Test that size equality constraints are created when scenario_independent_sizes=True."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(
+ timesteps=timesteps,
+ scenarios=scenarios,
+ scenario_independent_sizes=True, # Sizes should be equalized
+ scenario_independent_flow_rates=False, # Flow rates can vary
+ )
+
+ bus = fx.Bus('grid')
+ source = fx.Source(
+ label='solar',
+ outputs=[
+ fx.Flow(
+ label='out',
+ bus='grid',
+ size=fx.InvestParameters(
+ minimum_size=10,
+ maximum_size=100,
+ effects_of_investment_per_size={'cost': 100},
+ ),
+ )
+ ],
+ )
+
+ fs.add_elements(bus, source, fx.Effect('cost', '€', 'Total cost', is_objective=True))
+
+ calc = fx.FullCalculation('test', fs)
+ calc.do_modeling()
+
+ # Check that size equality constraint exists
+ constraint_names = [str(c) for c in calc.model.constraints]
+ size_constraints = [c for c in constraint_names if 'scenario_independent' in c and 'size' in c]
+
+ assert len(size_constraints) > 0, 'Size equality constraint should exist'
+
+
+def test_flow_rate_equality_constraints():
+ """Test that flow_rate equality constraints are created when scenario_independent_flow_rates=True."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(
+ timesteps=timesteps,
+ scenarios=scenarios,
+ scenario_independent_sizes=False, # Sizes can vary
+ scenario_independent_flow_rates=True, # Flow rates should be equalized
+ )
+
+ bus = fx.Bus('grid')
+ source = fx.Source(
+ label='solar',
+ outputs=[
+ fx.Flow(
+ label='out',
+ bus='grid',
+ size=fx.InvestParameters(
+ minimum_size=10,
+ maximum_size=100,
+ effects_of_investment_per_size={'cost': 100},
+ ),
+ )
+ ],
+ )
+
+ fs.add_elements(bus, source, fx.Effect('cost', '€', 'Total cost', is_objective=True))
+
+ calc = fx.FullCalculation('test', fs)
+ calc.do_modeling()
+
+ # Check that flow_rate equality constraint exists
+ constraint_names = [str(c) for c in calc.model.constraints]
+ flow_rate_constraints = [c for c in constraint_names if 'scenario_independent' in c and 'flow_rate' in c]
+
+ assert len(flow_rate_constraints) > 0, 'Flow rate equality constraint should exist'
+
+
+def test_selective_scenario_independence():
+ """Test selective scenario independence with specific element lists."""
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(
+ timesteps=timesteps,
+ scenarios=scenarios,
+ scenario_independent_sizes=['solar(out)'], # Only solar size is equalized
+ scenario_independent_flow_rates=['demand(in)'], # Only demand flow_rate is equalized
+ )
+
+ bus = fx.Bus('grid')
+ source = fx.Source(
+ label='solar',
+ outputs=[
+ fx.Flow(
+ label='out',
+ bus='grid',
+ size=fx.InvestParameters(
+ minimum_size=10, maximum_size=100, effects_of_investment_per_size={'cost': 100}
+ ),
+ )
+ ],
+ )
+ sink = fx.Sink(
+ label='demand',
+ inputs=[fx.Flow(label='in', bus='grid', size=50)],
+ )
+
+ fs.add_elements(bus, source, sink, fx.Effect('cost', '€', 'Total cost', is_objective=True))
+
+ calc = fx.FullCalculation('test', fs)
+ calc.do_modeling()
+
+ constraint_names = [str(c) for c in calc.model.constraints]
+
+ # Solar SHOULD have size constraints (it's in the list, so equalized)
+ solar_size_constraints = [c for c in constraint_names if 'solar(out)|size' in c and 'scenario_independent' in c]
+ assert len(solar_size_constraints) > 0
+
+ # Solar should NOT have flow_rate constraints (not in the list, so varies per scenario)
+ solar_flow_constraints = [
+ c for c in constraint_names if 'solar(out)|flow_rate' in c and 'scenario_independent' in c
+ ]
+ assert len(solar_flow_constraints) == 0
+
+ # Demand should NOT have size constraints (no InvestParameters, size is fixed)
+ demand_size_constraints = [c for c in constraint_names if 'demand(in)|size' in c and 'scenario_independent' in c]
+ assert len(demand_size_constraints) == 0
+
+ # Demand SHOULD have flow_rate constraints (it's in the list, so equalized)
+ demand_flow_constraints = [
+ c for c in constraint_names if 'demand(in)|flow_rate' in c and 'scenario_independent' in c
+ ]
+ assert len(demand_flow_constraints) > 0
+
+
+def test_scenario_parameters_io_persistence():
+ """Test that scenario_independent_sizes and scenario_independent_flow_rates persist through IO operations."""
+
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ # Create FlowSystem with custom scenario parameters
+ fs_original = fx.FlowSystem(
+ timesteps=timesteps,
+ scenarios=scenarios,
+ scenario_independent_sizes=['solar(out)'],
+ scenario_independent_flow_rates=True,
+ )
+
+ bus = fx.Bus('grid')
+ source = fx.Source(
+ label='solar',
+ outputs=[
+ fx.Flow(
+ label='out',
+ bus='grid',
+ size=fx.InvestParameters(
+ minimum_size=10, maximum_size=100, effects_of_investment_per_size={'cost': 100}
+ ),
+ )
+ ],
+ )
+
+ fs_original.add_elements(bus, source, fx.Effect('cost', '€', 'Total cost', is_objective=True))
+
+ # Save to dataset
+ fs_original.connect_and_transform()
+ ds = fs_original.to_dataset()
+
+ # Load from dataset
+ fs_loaded = fx.FlowSystem.from_dataset(ds)
+
+ # Verify parameters persisted
+ assert fs_loaded.scenario_independent_sizes == fs_original.scenario_independent_sizes
+ assert fs_loaded.scenario_independent_flow_rates == fs_original.scenario_independent_flow_rates
+
+
+def test_scenario_parameters_io_with_calculation():
+ """Test that scenario parameters persist through full calculation IO."""
+ import shutil
+ import tempfile
+
+ timesteps = pd.date_range('2023-01-01', periods=24, freq='h')
+ scenarios = pd.Index(['base', 'high'], name='scenario')
+
+ fs = fx.FlowSystem(
+ timesteps=timesteps,
+ scenarios=scenarios,
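+ # all investment sizes are equalized across scenarios; only demand's flow_rate is equalized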
+ scenario_independent_sizes=True,
+ scenario_independent_flow_rates=['demand(in)'],
+ )
+
+ bus = fx.Bus('grid')
+ source = fx.Source(
+ label='solar',
+ outputs=[
+ fx.Flow(
+ label='out',
+ bus='grid',
+ size=fx.InvestParameters(
+ minimum_size=10, maximum_size=100, effects_of_investment_per_size={'cost': 100}
+ ),
+ )
+ ],
+ )
+ sink = fx.Sink(
+ label='demand',
+ inputs=[fx.Flow(label='in', bus='grid', size=50)],
+ )
+
+ fs.add_elements(bus, source, sink, fx.Effect('cost', '€', 'Total cost', is_objective=True))
+
+ # Create temp directory for results
+ temp_dir = tempfile.mkdtemp()
+
+ try:
+ # Solve and save
+ calc = fx.FullCalculation('test_io', fs, folder=temp_dir)
+ calc.do_modeling()
+ calc.solve(fx.solvers.HighsSolver(mip_gap=0.01, time_limit_seconds=60))
+ calc.results.to_file()
+
+ # Load results
+ results = fx.results.CalculationResults.from_file(temp_dir, 'test_io')
+ fs_loaded = fx.FlowSystem.from_dataset(results.flow_system_data)
+
+ # Verify parameters persisted
+ assert fs_loaded.scenario_independent_sizes == fs.scenario_independent_sizes
+ assert fs_loaded.scenario_independent_flow_rates == fs.scenario_independent_flow_rates
+
+ # Verify constraints are recreated correctly
+ calc2 = fx.FullCalculation('test_io_2', fs_loaded, folder=temp_dir)
+ calc2.do_modeling()
+
+ constraint_names1 = [str(c) for c in calc.model.constraints]
+ constraint_names2 = [str(c) for c in calc2.model.constraints]
+
+ size_constraints1 = [c for c in constraint_names1 if 'scenario_independent' in c and 'size' in c]
+ size_constraints2 = [c for c in constraint_names2 if 'scenario_independent' in c and 'size' in c]
+
+ assert len(size_constraints1) == len(size_constraints2)
+
+ finally:
+ # Clean up
+ shutil.rmtree(temp_dir)
diff --git a/tests/test_storage.py b/tests/test_storage.py
index 5971c2f5c..8d0c495c2 100644
--- a/tests/test_storage.py
+++ b/tests/test_storage.py
@@ -1,3 +1,4 @@
+import numpy as np
import pytest
import flixopt as fx
@@ -8,11 +9,9 @@
class TestStorageModel:
"""Test that storage model variables and constraints are correctly generated."""
- def test_basic_storage(self, basic_flow_system_linopy):
+ def test_basic_storage(self, basic_flow_system_linopy_coords, coords_config):
"""Test that basic storage model variables and constraints are correctly generated."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
- timesteps_extra = flow_system.time_series_collection.timesteps_extra
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create a simple storage
storage = fx.Storage(
@@ -52,13 +51,14 @@ def test_basic_storage(self, basic_flow_system_linopy):
# Check variable properties
assert_var_equal(
- model['TestStorage(Q_th_in)|flow_rate'], model.add_variables(lower=0, upper=20, coords=(timesteps,))
+ model['TestStorage(Q_th_in)|flow_rate'], model.add_variables(lower=0, upper=20, coords=model.get_coords())
)
assert_var_equal(
- model['TestStorage(Q_th_out)|flow_rate'], model.add_variables(lower=0, upper=20, coords=(timesteps,))
+ model['TestStorage(Q_th_out)|flow_rate'], model.add_variables(lower=0, upper=20, coords=model.get_coords())
)
assert_var_equal(
- model['TestStorage|charge_state'], model.add_variables(lower=0, upper=30, coords=(timesteps_extra,))
+ model['TestStorage|charge_state'],
+ model.add_variables(lower=0, upper=30, coords=model.get_coords(extra_timestep=True)),
)
# Check constraint formulations
@@ -82,11 +82,9 @@ def test_basic_storage(self, basic_flow_system_linopy):
model.variables['TestStorage|charge_state'].isel(time=0) == 0,
)
- def test_lossy_storage(self, basic_flow_system_linopy):
+ def test_lossy_storage(self, basic_flow_system_linopy_coords, coords_config):
"""Test that basic storage model variables and constraints are correctly generated."""
- flow_system = basic_flow_system_linopy
- timesteps = flow_system.time_series_collection.timesteps
- timesteps_extra = flow_system.time_series_collection.timesteps_extra
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create a simple storage
storage = fx.Storage(
@@ -129,13 +127,14 @@ def test_lossy_storage(self, basic_flow_system_linopy):
# Check variable properties
assert_var_equal(
- model['TestStorage(Q_th_in)|flow_rate'], model.add_variables(lower=0, upper=20, coords=(timesteps,))
+ model['TestStorage(Q_th_in)|flow_rate'], model.add_variables(lower=0, upper=20, coords=model.get_coords())
)
assert_var_equal(
- model['TestStorage(Q_th_out)|flow_rate'], model.add_variables(lower=0, upper=20, coords=(timesteps,))
+ model['TestStorage(Q_th_out)|flow_rate'], model.add_variables(lower=0, upper=20, coords=model.get_coords())
)
assert_var_equal(
- model['TestStorage|charge_state'], model.add_variables(lower=0, upper=30, coords=(timesteps_extra,))
+ model['TestStorage|charge_state'],
+ model.add_variables(lower=0, upper=30, coords=model.get_coords(extra_timestep=True)),
)
# Check constraint formulations
@@ -167,9 +166,94 @@ def test_lossy_storage(self, basic_flow_system_linopy):
model.variables['TestStorage|charge_state'].isel(time=0) == 0,
)
- def test_storage_with_investment(self, basic_flow_system_linopy):
+ def test_charge_state_bounds(self, basic_flow_system_linopy_coords, coords_config):
+ """Test that basic storage model variables and constraints are correctly generated."""
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
+
+ # Create a simple storage
+ storage = fx.Storage(
+ 'TestStorage',
+ charging=fx.Flow('Q_th_in', bus='Fernwärme', size=20),
+ discharging=fx.Flow('Q_th_out', bus='Fernwärme', size=20),
+ capacity_in_flow_hours=30, # 30 kWh storage capacity
+ initial_charge_state=3,
+ prevent_simultaneous_charge_and_discharge=True,
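+ # per-timestep relative bounds (one entry per timestep of the fixture), scaled by the 30 kWh capacity below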
+ relative_maximum_charge_state=np.array([0.14, 0.22, 0.3, 0.38, 0.46, 0.54, 0.62, 0.7, 0.78, 0.86]),
+ relative_minimum_charge_state=np.array([0.07, 0.11, 0.15, 0.19, 0.23, 0.27, 0.31, 0.35, 0.39, 0.43]),
+ )
+
+ flow_system.add_elements(storage)
+ model = create_linopy_model(flow_system)
+
+ # Check that all expected variables exist - linopy model variables are accessed by indexing
+ expected_variables = {
+ 'TestStorage(Q_th_in)|flow_rate',
+ 'TestStorage(Q_th_in)|total_flow_hours',
+ 'TestStorage(Q_th_out)|flow_rate',
+ 'TestStorage(Q_th_out)|total_flow_hours',
+ 'TestStorage|charge_state',
+ 'TestStorage|netto_discharge',
+ }
+ for var_name in expected_variables:
+ assert var_name in model.variables, f'Missing variable: {var_name}'
+
+ # Check that all expected constraints exist - linopy model constraints are accessed by indexing
+ expected_constraints = {
+ 'TestStorage(Q_th_in)|total_flow_hours',
+ 'TestStorage(Q_th_out)|total_flow_hours',
+ 'TestStorage|netto_discharge',
+ 'TestStorage|charge_state',
+ 'TestStorage|initial_charge_state',
+ }
+ for con_name in expected_constraints:
+ assert con_name in model.constraints, f'Missing constraint: {con_name}'
+
+ # Check variable properties
+ assert_var_equal(
+ model['TestStorage(Q_th_in)|flow_rate'], model.add_variables(lower=0, upper=20, coords=model.get_coords())
+ )
+ assert_var_equal(
+ model['TestStorage(Q_th_out)|flow_rate'], model.add_variables(lower=0, upper=20, coords=model.get_coords())
+ )
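+ # charge_state covers one extra timestep, so the relative bounds are reindexed onto
+ # the extended time coordinate and forward-filled before being scaled by the capacity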
+ assert_var_equal(
+ model['TestStorage|charge_state'],
+ model.add_variables(
+ lower=storage.relative_minimum_charge_state.reindex(
+ time=model.get_coords(extra_timestep=True)['time']
+ ).ffill('time')
+ * 30,
+ upper=storage.relative_maximum_charge_state.reindex(
+ time=model.get_coords(extra_timestep=True)['time']
+ ).ffill('time')
+ * 30,
+ coords=model.get_coords(extra_timestep=True),
+ ),
+ )
+
+ # Check constraint formulations
+ assert_conequal(
+ model.constraints['TestStorage|netto_discharge'],
+ model.variables['TestStorage|netto_discharge']
+ == model.variables['TestStorage(Q_th_out)|flow_rate'] - model.variables['TestStorage(Q_th_in)|flow_rate'],
+ )
+
+ charge_state = model.variables['TestStorage|charge_state']
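+ # with default efficiencies (eta=1) and no relative losses, the balance reduces to
+ # charge_state[t+1] == charge_state[t] + charging*dt - discharging*dt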
+ assert_conequal(
+ model.constraints['TestStorage|charge_state'],
+ charge_state.isel(time=slice(1, None))
+ == charge_state.isel(time=slice(None, -1))
+ + model.variables['TestStorage(Q_th_in)|flow_rate'] * model.hours_per_step
+ - model.variables['TestStorage(Q_th_out)|flow_rate'] * model.hours_per_step,
+ )
+ # Check initial charge state constraint
+ assert_conequal(
+ model.constraints['TestStorage|initial_charge_state'],
+ model.variables['TestStorage|charge_state'].isel(time=0) == 3,
+ )
+
+ def test_storage_with_investment(self, basic_flow_system_linopy_coords, coords_config):
"""Test storage with investment parameters."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create storage with investment parameters
storage = fx.Storage(
@@ -177,7 +261,11 @@ def test_storage_with_investment(self, basic_flow_system_linopy):
charging=fx.Flow('Q_th_in', bus='Fernwärme', size=20),
discharging=fx.Flow('Q_th_out', bus='Fernwärme', size=20),
capacity_in_flow_hours=fx.InvestParameters(
- fix_effects=100, specific_effects=10, minimum_size=20, maximum_size=100, optional=True
+ effects_of_investment=100,
+ effects_of_investment_per_size=10,
+ minimum_size=20,
+ maximum_size=100,
+ mandatory=False,
),
initial_charge_state=0,
eta_charge=0.9,
@@ -193,29 +281,35 @@ def test_storage_with_investment(self, basic_flow_system_linopy):
for var_name in {
'InvestStorage|charge_state',
'InvestStorage|size',
- 'InvestStorage|is_invested',
+ 'InvestStorage|invested',
}:
assert var_name in model.variables, f'Missing investment variable: {var_name}'
# Check investment constraints exist
- for con_name in {'InvestStorage|is_invested_ub', 'InvestStorage|is_invested_lb'}:
+ for con_name in {'InvestStorage|size|ub', 'InvestStorage|size|lb'}:
assert con_name in model.constraints, f'Missing investment constraint: {con_name}'
# Check variable properties
- assert_var_equal(model['InvestStorage|size'], model.add_variables(lower=0, upper=100))
- assert_var_equal(model['InvestStorage|is_invested'], model.add_variables(binary=True))
+ assert_var_equal(
+ model['InvestStorage|size'],
+ model.add_variables(lower=0, upper=100, coords=model.get_coords(['period', 'scenario'])),
+ )
+ assert_var_equal(
+ model['InvestStorage|invested'],
+ model.add_variables(binary=True, coords=model.get_coords(['period', 'scenario'])),
+ )
assert_conequal(
- model.constraints['InvestStorage|is_invested_ub'],
- model.variables['InvestStorage|size'] <= model.variables['InvestStorage|is_invested'] * 100,
+ model.constraints['InvestStorage|size|ub'],
+ model.variables['InvestStorage|size'] <= model.variables['InvestStorage|invested'] * 100,
)
assert_conequal(
- model.constraints['InvestStorage|is_invested_lb'],
- model.variables['InvestStorage|size'] >= model.variables['InvestStorage|is_invested'] * 20,
+ model.constraints['InvestStorage|size|lb'],
+ model.variables['InvestStorage|size'] >= model.variables['InvestStorage|invested'] * 20,
)
- def test_storage_with_final_state_constraints(self, basic_flow_system_linopy):
+ def test_storage_with_final_state_constraints(self, basic_flow_system_linopy_coords, coords_config):
"""Test storage with final state constraints."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create storage with final state constraints
storage = fx.Storage(
@@ -258,9 +352,9 @@ def test_storage_with_final_state_constraints(self, basic_flow_system_linopy):
model.variables['FinalStateStorage|charge_state'].isel(time=-1) <= 25,
)
- def test_storage_cyclic_initialization(self, basic_flow_system_linopy):
+ def test_storage_cyclic_initialization(self, basic_flow_system_linopy_coords, coords_config):
"""Test storage with cyclic initialization."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create storage with cyclic initialization
storage = fx.Storage(
@@ -291,9 +385,9 @@ def test_storage_cyclic_initialization(self, basic_flow_system_linopy):
'prevent_simultaneous',
[True, False],
)
- def test_simultaneous_charge_discharge(self, basic_flow_system_linopy, prevent_simultaneous):
+ def test_simultaneous_charge_discharge(self, basic_flow_system_linopy_coords, coords_config, prevent_simultaneous):
"""Test prevent_simultaneous_charge_and_discharge parameter."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create storage with or without simultaneous charge/discharge prevention
storage = fx.Storage(
@@ -321,35 +415,41 @@ def test_simultaneous_charge_discharge(self, basic_flow_system_linopy, prevent_s
assert var_name in model.variables, f'Missing binary variable: {var_name}'
# Check for constraints that enforce either charging or discharging
- constraint_name = 'SimultaneousStorage|PreventSimultaneousUsage|prevent_simultaneous_use'
+ constraint_name = 'SimultaneousStorage|prevent_simultaneous_use'
assert constraint_name in model.constraints, 'Missing constraint to prevent simultaneous operation'
assert_conequal(
- model.constraints['SimultaneousStorage|PreventSimultaneousUsage|prevent_simultaneous_use'],
+ model.constraints['SimultaneousStorage|prevent_simultaneous_use'],
model.variables['SimultaneousStorage(Q_th_in)|on'] + model.variables['SimultaneousStorage(Q_th_out)|on']
- <= 1.1,
+ <= 1,
)
@pytest.mark.parametrize(
- 'optional,minimum_size,expected_vars,expected_constraints',
+ 'mandatory,minimum_size,expected_vars,expected_constraints',
[
- (True, None, {'InvestStorage|is_invested'}, {'InvestStorage|is_invested_lb'}),
- (True, 20, {'InvestStorage|is_invested'}, {'InvestStorage|is_invested_lb'}),
- (False, None, set(), set()),
- (False, 20, set(), set()),
+ (False, None, {'InvestStorage|invested'}, {'InvestStorage|size|lb'}),
+ (False, 20, {'InvestStorage|invested'}, {'InvestStorage|size|lb'}),
+ (True, None, set(), set()),
+ (True, 20, set(), set()),
],
)
def test_investment_parameters(
- self, basic_flow_system_linopy, optional, minimum_size, expected_vars, expected_constraints
+ self,
+ basic_flow_system_linopy_coords,
+ coords_config,
+ mandatory,
+ minimum_size,
+ expected_vars,
+ expected_constraints,
):
"""Test different investment parameter combinations."""
- flow_system = basic_flow_system_linopy
+ flow_system, coords_config = basic_flow_system_linopy_coords, coords_config
# Create investment parameters
invest_params = {
- 'fix_effects': 100,
- 'specific_effects': 10,
- 'optional': optional,
+ 'effects_of_investment': 100,
+ 'effects_of_investment_per_size': 10,
+ 'mandatory': mandatory,
}
if minimum_size is not None:
invest_params['minimum_size'] = minimum_size
@@ -371,20 +471,18 @@ def test_investment_parameters(
# Check that expected variables exist
for var_name in expected_vars:
- if optional:
+ if not mandatory: # Optional investment (mandatory=False)
assert var_name in model.variables, f'Expected variable {var_name} not found'
# Check that expected constraints exist
for constraint_name in expected_constraints:
- if optional:
+ if not mandatory: # Optional investment (mandatory=False)
assert constraint_name in model.constraints, f'Expected constraint {constraint_name} not found'
- # If optional is False, is_invested should be fixed to 1
- if not optional:
- # Check that the is_invested variable exists and is fixed to 1
- if 'InvestStorage|is_invested' in model.variables:
- var = model.variables['InvestStorage|is_invested']
+ # If mandatory is True, invested should be fixed to 1
+ if mandatory:
+ # Check that the invested variable exists and is fixed to 1
+ if 'InvestStorage|invested' in model.variables:
+ var = model.variables['InvestStorage|invested']
# Check if the lower and upper bounds are both 1
- assert var.upper == 1 and var.lower == 1, (
- 'is_invested variable should be fixed to 1 when optional=False'
- )
+ assert var.upper == 1 and var.lower == 1, 'invested variable should be fixed to 1 when mandatory=True'
diff --git a/tests/test_timeseries.py b/tests/test_timeseries.py
index 36ede05c4..e69de29bb 100644
--- a/tests/test_timeseries.py
+++ b/tests/test_timeseries.py
@@ -1,603 +0,0 @@
-import tempfile
-from pathlib import Path
-
-import numpy as np
-import pandas as pd
-import pytest
-import xarray as xr
-
-from flixopt.core import TimeSeries, TimeSeriesCollection, TimeSeriesData
-
-
-@pytest.fixture
-def sample_timesteps():
- """Create a sample time index with the required 'time' name."""
- return pd.date_range('2023-01-01', periods=5, freq='D', name='time')
-
-
-@pytest.fixture
-def simple_dataarray(sample_timesteps):
- """Create a simple DataArray with time dimension."""
- return xr.DataArray([10, 20, 30, 40, 50], coords={'time': sample_timesteps}, dims=['time'])
-
-
-@pytest.fixture
-def sample_timeseries(simple_dataarray):
- """Create a sample TimeSeries object."""
- return TimeSeries(simple_dataarray, name='Test Series')
-
-
-class TestTimeSeries:
- """Test suite for TimeSeries class."""
-
- def test_initialization(self, simple_dataarray):
- """Test basic initialization of TimeSeries."""
- ts = TimeSeries(simple_dataarray, name='Test Series')
-
- # Check basic properties
- assert ts.name == 'Test Series'
- assert ts.aggregation_weight is None
- assert ts.aggregation_group is None
-
- # Check data initialization
- assert isinstance(ts.stored_data, xr.DataArray)
- assert ts.stored_data.equals(simple_dataarray)
- assert ts.active_data.equals(simple_dataarray)
-
- # Check backup was created
- assert ts._backup.equals(simple_dataarray)
-
- # Check active timesteps
- assert ts.active_timesteps.equals(simple_dataarray.indexes['time'])
-
- def test_initialization_with_aggregation_params(self, simple_dataarray):
- """Test initialization with aggregation parameters."""
- ts = TimeSeries(
- simple_dataarray, name='Weighted Series', aggregation_weight=0.5, aggregation_group='test_group'
- )
-
- assert ts.name == 'Weighted Series'
- assert ts.aggregation_weight == 0.5
- assert ts.aggregation_group == 'test_group'
-
- def test_initialization_validation(self, sample_timesteps):
- """Test validation during initialization."""
- # Test missing time dimension
- invalid_data = xr.DataArray([1, 2, 3], dims=['invalid_dim'])
- with pytest.raises(ValueError, match='must have a "time" index'):
- TimeSeries(invalid_data, name='Invalid Series')
-
- # Test multi-dimensional data
- multi_dim_data = xr.DataArray(
- [[1, 2, 3], [4, 5, 6]], coords={'dim1': [0, 1], 'time': sample_timesteps[:3]}, dims=['dim1', 'time']
- )
- with pytest.raises(ValueError, match='dimensions of DataArray must be 1'):
- TimeSeries(multi_dim_data, name='Multi-dim Series')
-
- def test_active_timesteps_getter_setter(self, sample_timeseries, sample_timesteps):
- """Test active_timesteps getter and setter."""
- # Initial state should use all timesteps
- assert sample_timeseries.active_timesteps.equals(sample_timesteps)
-
- # Set to a subset
- subset_index = sample_timesteps[1:3]
- sample_timeseries.active_timesteps = subset_index
- assert sample_timeseries.active_timesteps.equals(subset_index)
-
- # Active data should reflect the subset
- assert sample_timeseries.active_data.equals(sample_timeseries.stored_data.sel(time=subset_index))
-
- # Reset to full index
- sample_timeseries.active_timesteps = None
- assert sample_timeseries.active_timesteps.equals(sample_timesteps)
-
- # Test invalid type
- with pytest.raises(TypeError, match='must be a pandas DatetimeIndex'):
- sample_timeseries.active_timesteps = 'invalid'
-
- def test_reset(self, sample_timeseries, sample_timesteps):
- """Test reset method."""
- # Set to subset first
- subset_index = sample_timesteps[1:3]
- sample_timeseries.active_timesteps = subset_index
-
- # Reset
- sample_timeseries.reset()
-
- # Should be back to full index
- assert sample_timeseries.active_timesteps.equals(sample_timesteps)
- assert sample_timeseries.active_data.equals(sample_timeseries.stored_data)
-
- def test_restore_data(self, sample_timeseries, simple_dataarray):
- """Test restore_data method."""
- # Modify the stored data
- new_data = xr.DataArray([1, 2, 3, 4, 5], coords={'time': sample_timeseries.active_timesteps}, dims=['time'])
-
- # Store original data for comparison
- original_data = sample_timeseries.stored_data
-
- # Set new data
- sample_timeseries.stored_data = new_data
- assert sample_timeseries.stored_data.equals(new_data)
-
- # Restore from backup
- sample_timeseries.restore_data()
-
- # Should be back to original data
- assert sample_timeseries.stored_data.equals(original_data)
- assert sample_timeseries.active_data.equals(original_data)
-
- def test_stored_data_setter(self, sample_timeseries, sample_timesteps):
- """Test stored_data setter with different data types."""
- # Test with a Series
- series_data = pd.Series([5, 6, 7, 8, 9], index=sample_timesteps)
- sample_timeseries.stored_data = series_data
- assert np.array_equal(sample_timeseries.stored_data.values, series_data.values)
-
- # Test with a single-column DataFrame
- df_data = pd.DataFrame({'col1': [15, 16, 17, 18, 19]}, index=sample_timesteps)
- sample_timeseries.stored_data = df_data
- assert np.array_equal(sample_timeseries.stored_data.values, df_data['col1'].values)
-
- # Test with a NumPy array
- array_data = np.array([25, 26, 27, 28, 29])
- sample_timeseries.stored_data = array_data
- assert np.array_equal(sample_timeseries.stored_data.values, array_data)
-
- # Test with a scalar
- sample_timeseries.stored_data = 42
- assert np.all(sample_timeseries.stored_data.values == 42)
-
- # Test with another DataArray
- another_dataarray = xr.DataArray([30, 31, 32, 33, 34], coords={'time': sample_timesteps}, dims=['time'])
- sample_timeseries.stored_data = another_dataarray
- assert sample_timeseries.stored_data.equals(another_dataarray)
-
- def test_stored_data_setter_no_change(self, sample_timeseries):
- """Test stored_data setter when data doesn't change."""
- # Get current data
- current_data = sample_timeseries.stored_data
- current_backup = sample_timeseries._backup
-
- # Set the same data
- sample_timeseries.stored_data = current_data
-
- # Backup shouldn't change
- assert sample_timeseries._backup is current_backup # Should be the same object
-
- def test_from_datasource(self, sample_timesteps):
- """Test from_datasource class method."""
- # Test with scalar
- ts_scalar = TimeSeries.from_datasource(42, 'Scalar Series', sample_timesteps)
- assert np.all(ts_scalar.stored_data.values == 42)
-
- # Test with Series
- series_data = pd.Series([1, 2, 3, 4, 5], index=sample_timesteps)
- ts_series = TimeSeries.from_datasource(series_data, 'Series Data', sample_timesteps)
- assert np.array_equal(ts_series.stored_data.values, series_data.values)
-
- # Test with aggregation parameters
- ts_with_agg = TimeSeries.from_datasource(
- series_data, 'Aggregated Series', sample_timesteps, aggregation_weight=0.7, aggregation_group='group1'
- )
- assert ts_with_agg.aggregation_weight == 0.7
- assert ts_with_agg.aggregation_group == 'group1'
-
- def test_to_json_from_json(self, sample_timeseries):
- """Test to_json and from_json methods."""
- # Test to_json (dictionary only)
- json_dict = sample_timeseries.to_json()
- assert json_dict['name'] == sample_timeseries.name
- assert 'data' in json_dict
- assert 'coords' in json_dict['data']
- assert 'time' in json_dict['data']['coords']
-
- # Test to_json with file saving
- with tempfile.TemporaryDirectory() as tmpdirname:
- filepath = Path(tmpdirname) / 'timeseries.json'
- sample_timeseries.to_json(filepath)
- assert filepath.exists()
-
- # Test from_json with file loading
- loaded_ts = TimeSeries.from_json(path=filepath)
- assert loaded_ts.name == sample_timeseries.name
- assert np.array_equal(loaded_ts.stored_data.values, sample_timeseries.stored_data.values)
-
- # Test from_json with dictionary
- loaded_ts_dict = TimeSeries.from_json(data=json_dict)
- assert loaded_ts_dict.name == sample_timeseries.name
- assert np.array_equal(loaded_ts_dict.stored_data.values, sample_timeseries.stored_data.values)
-
- # Test validation in from_json
- with pytest.raises(ValueError, match="one of 'path' or 'data'"):
- TimeSeries.from_json(data=json_dict, path='dummy.json')
-
- def test_all_equal(self, sample_timesteps):
- """Test all_equal property."""
- # All equal values
- equal_data = xr.DataArray([5, 5, 5, 5, 5], coords={'time': sample_timesteps}, dims=['time'])
- ts_equal = TimeSeries(equal_data, 'Equal Series')
- assert ts_equal.all_equal is True
-
- # Not all equal
- unequal_data = xr.DataArray([5, 5, 6, 5, 5], coords={'time': sample_timesteps}, dims=['time'])
- ts_unequal = TimeSeries(unequal_data, 'Unequal Series')
- assert ts_unequal.all_equal is False
-
- def test_arithmetic_operations(self, sample_timeseries):
- """Test arithmetic operations."""
- # Create a second TimeSeries for testing
- data2 = xr.DataArray([1, 2, 3, 4, 5], coords={'time': sample_timeseries.active_timesteps}, dims=['time'])
- ts2 = TimeSeries(data2, 'Second Series')
-
- # Test operations between two TimeSeries objects
- assert np.array_equal(
- (sample_timeseries + ts2).values, sample_timeseries.active_data.values + ts2.active_data.values
- )
- assert np.array_equal(
- (sample_timeseries - ts2).values, sample_timeseries.active_data.values - ts2.active_data.values
- )
- assert np.array_equal(
- (sample_timeseries * ts2).values, sample_timeseries.active_data.values * ts2.active_data.values
- )
- assert np.array_equal(
- (sample_timeseries / ts2).values, sample_timeseries.active_data.values / ts2.active_data.values
- )
-
- # Test operations with DataArrays
- assert np.array_equal((sample_timeseries + data2).values, sample_timeseries.active_data.values + data2.values)
- assert np.array_equal((data2 + sample_timeseries).values, data2.values + sample_timeseries.active_data.values)
-
- # Test operations with scalars
- assert np.array_equal((sample_timeseries + 5).values, sample_timeseries.active_data.values + 5)
- assert np.array_equal((5 + sample_timeseries).values, 5 + sample_timeseries.active_data.values)
-
- # Test unary operations
- assert np.array_equal((-sample_timeseries).values, -sample_timeseries.active_data.values)
- assert np.array_equal((+sample_timeseries).values, +sample_timeseries.active_data.values)
- assert np.array_equal((abs(sample_timeseries)).values, abs(sample_timeseries.active_data.values))
-
- def test_comparison_operations(self, sample_timesteps):
- """Test comparison operations."""
- data1 = xr.DataArray([10, 20, 30, 40, 50], coords={'time': sample_timesteps}, dims=['time'])
- data2 = xr.DataArray([5, 10, 15, 20, 25], coords={'time': sample_timesteps}, dims=['time'])
-
- ts1 = TimeSeries(data1, 'Series 1')
- ts2 = TimeSeries(data2, 'Series 2')
-
- # Test __gt__ method
- assert (ts1 > ts2).all().item()
-
- # Test with mixed values
- data3 = xr.DataArray([5, 25, 15, 45, 25], coords={'time': sample_timesteps}, dims=['time'])
- ts3 = TimeSeries(data3, 'Series 3')
-
- assert not (ts1 > ts3).all().item() # Not all values in ts1 are greater than ts3
-
- def test_numpy_ufunc(self, sample_timeseries):
- """Test numpy ufunc compatibility."""
- # Test basic numpy functions
- assert np.array_equal(np.add(sample_timeseries, 5).values, np.add(sample_timeseries.active_data, 5).values)
-
- assert np.array_equal(
- np.multiply(sample_timeseries, 2).values, np.multiply(sample_timeseries.active_data, 2).values
- )
-
- # Test with two TimeSeries objects
- data2 = xr.DataArray([1, 2, 3, 4, 5], coords={'time': sample_timeseries.active_timesteps}, dims=['time'])
- ts2 = TimeSeries(data2, 'Second Series')
-
- assert np.array_equal(
- np.add(sample_timeseries, ts2).values, np.add(sample_timeseries.active_data, ts2.active_data).values
- )
-
- def test_sel_and_isel_properties(self, sample_timeseries):
- """Test sel and isel properties."""
- # Test that sel property works
- selected = sample_timeseries.sel(time=sample_timeseries.active_timesteps[0])
- assert selected.item() == sample_timeseries.active_data.values[0]
-
- # Test that isel property works
- indexed = sample_timeseries.isel(time=0)
- assert indexed.item() == sample_timeseries.active_data.values[0]
-
-
-@pytest.fixture
-def sample_collection(sample_timesteps):
- """Create a sample TimeSeriesCollection."""
- return TimeSeriesCollection(sample_timesteps)
-
-
-@pytest.fixture
-def populated_collection(sample_collection):
- """Create a TimeSeriesCollection with test data."""
- # Add a constant time series
- sample_collection.create_time_series(42, 'constant_series')
-
- # Add a varying time series
- varying_data = np.array([10, 20, 30, 40, 50])
- sample_collection.create_time_series(varying_data, 'varying_series')
-
- # Add a time series with extra timestep
- sample_collection.create_time_series(
- np.array([1, 2, 3, 4, 5, 6]), 'extra_timestep_series', needs_extra_timestep=True
- )
-
- # Add series with aggregation settings
- sample_collection.create_time_series(
- TimeSeriesData(np.array([5, 5, 5, 5, 5]), agg_group='group1'), 'group1_series1'
- )
- sample_collection.create_time_series(
- TimeSeriesData(np.array([6, 6, 6, 6, 6]), agg_group='group1'), 'group1_series2'
- )
- sample_collection.create_time_series(
- TimeSeriesData(np.array([10, 10, 10, 10, 10]), agg_weight=0.5), 'weighted_series'
- )
-
- return sample_collection
-
-
-class TestTimeSeriesCollection:
- """Test suite for TimeSeriesCollection."""
-
- def test_initialization(self, sample_timesteps):
- """Test basic initialization."""
- collection = TimeSeriesCollection(sample_timesteps)
-
- assert collection.all_timesteps.equals(sample_timesteps)
- assert len(collection.all_timesteps_extra) == len(sample_timesteps) + 1
- assert isinstance(collection.all_hours_per_timestep, xr.DataArray)
- assert len(collection) == 0
-
- def test_initialization_with_custom_hours(self, sample_timesteps):
- """Test initialization with custom hour settings."""
- # Test with last timestep duration
- last_timestep_hours = 12
- collection = TimeSeriesCollection(sample_timesteps, hours_of_last_timestep=last_timestep_hours)
-
- # Verify the last timestep duration
- extra_step_delta = collection.all_timesteps_extra[-1] - collection.all_timesteps_extra[-2]
- assert extra_step_delta == pd.Timedelta(hours=last_timestep_hours)
-
- # Test with previous timestep duration
- hours_per_step = 8
- collection2 = TimeSeriesCollection(sample_timesteps, hours_of_previous_timesteps=hours_per_step)
-
- assert collection2.hours_of_previous_timesteps == hours_per_step
-
- def test_create_time_series(self, sample_collection):
- """Test creating time series."""
- # Test scalar
- ts1 = sample_collection.create_time_series(42, 'scalar_series')
- assert ts1.name == 'scalar_series'
- assert np.all(ts1.active_data.values == 42)
-
- # Test numpy array
- data = np.array([1, 2, 3, 4, 5])
- ts2 = sample_collection.create_time_series(data, 'array_series')
- assert np.array_equal(ts2.active_data.values, data)
-
- # Test with TimeSeriesData
- ts3 = sample_collection.create_time_series(TimeSeriesData(10, agg_weight=0.7), 'weighted_series')
- assert ts3.aggregation_weight == 0.7
-
- # Test with extra timestep
- ts4 = sample_collection.create_time_series(5, 'extra_series', needs_extra_timestep=True)
- assert ts4.needs_extra_timestep
- assert len(ts4.active_data) == len(sample_collection.timesteps_extra)
-
- # Test duplicate name
- with pytest.raises(ValueError, match='already exists'):
- sample_collection.create_time_series(1, 'scalar_series')
-
- def test_access_time_series(self, populated_collection):
- """Test accessing time series."""
- # Test __getitem__
- ts = populated_collection['varying_series']
- assert ts.name == 'varying_series'
-
- # Test __contains__ with string
- assert 'constant_series' in populated_collection
- assert 'nonexistent_series' not in populated_collection
-
- # Test __contains__ with TimeSeries object
- assert populated_collection['varying_series'] in populated_collection
-
- # Test __iter__
- names = [ts.name for ts in populated_collection]
- assert len(names) == 6
- assert 'varying_series' in names
-
- # Test access to non-existent series
- with pytest.raises(KeyError):
- populated_collection['nonexistent_series']
-
- def test_constants_and_non_constants(self, populated_collection):
- """Test constants and non_constants properties."""
- # Test constants
- constants = populated_collection.constants
- assert len(constants) == 4 # constant_series, group1_series1, group1_series2, weighted_series
- assert all(ts.all_equal for ts in constants)
-
- # Test non_constants
- non_constants = populated_collection.non_constants
- assert len(non_constants) == 2 # varying_series, extra_timestep_series
- assert all(not ts.all_equal for ts in non_constants)
-
- # Test modifying a series changes the results
- populated_collection['constant_series'].stored_data = np.array([1, 2, 3, 4, 5])
- updated_constants = populated_collection.constants
- assert len(updated_constants) == 3 # One less constant
- assert 'constant_series' not in [ts.name for ts in updated_constants]
-
- def test_timesteps_properties(self, populated_collection, sample_timesteps):
- """Test timestep-related properties."""
- # Test default (all) timesteps
- assert populated_collection.timesteps.equals(sample_timesteps)
- assert len(populated_collection.timesteps_extra) == len(sample_timesteps) + 1
-
- # Test activating a subset
- subset = sample_timesteps[1:3]
- populated_collection.activate_timesteps(subset)
-
- assert populated_collection.timesteps.equals(subset)
- assert len(populated_collection.timesteps_extra) == len(subset) + 1
-
- # Check that time series were updated
- assert populated_collection['varying_series'].active_timesteps.equals(subset)
- assert populated_collection['extra_timestep_series'].active_timesteps.equals(
- populated_collection.timesteps_extra
- )
-
- # Test reset
- populated_collection.reset()
- assert populated_collection.timesteps.equals(sample_timesteps)
-
- def test_to_dataframe_and_dataset(self, populated_collection):
- """Test conversion to DataFrame and Dataset."""
- # Test to_dataset
- ds = populated_collection.to_dataset()
- assert isinstance(ds, xr.Dataset)
- assert len(ds.data_vars) == 6
-
- # Test to_dataframe with different filters
- df_all = populated_collection.to_dataframe(filtered='all')
- assert len(df_all.columns) == 6
-
- df_constant = populated_collection.to_dataframe(filtered='constant')
- assert len(df_constant.columns) == 4
-
- df_non_constant = populated_collection.to_dataframe(filtered='non_constant')
- assert len(df_non_constant.columns) == 2
-
- # Test invalid filter
- with pytest.raises(ValueError):
- populated_collection.to_dataframe(filtered='invalid')
-
- def test_calculate_aggregation_weights(self, populated_collection):
- """Test aggregation weight calculation."""
- weights = populated_collection.calculate_aggregation_weights()
-
- # Group weights should be 0.5 each (1/2)
- assert populated_collection.group_weights['group1'] == 0.5
-
- # Series in group1 should have weight 0.5
- assert weights['group1_series1'] == 0.5
- assert weights['group1_series2'] == 0.5
-
- # Series with explicit weight should have that weight
- assert weights['weighted_series'] == 0.5
-
- # Series without group or weight should have weight 1
- assert weights['constant_series'] == 1
-
- def test_insert_new_data(self, populated_collection, sample_timesteps):
- """Test inserting new data."""
- # Create new data
- new_data = pd.DataFrame(
- {
- 'constant_series': [100, 100, 100, 100, 100],
- 'varying_series': [5, 10, 15, 20, 25],
- # extra_timestep_series is omitted to test partial updates
- },
- index=sample_timesteps,
- )
-
- # Insert data
- populated_collection.insert_new_data(new_data)
-
- # Verify updates
- assert np.all(populated_collection['constant_series'].active_data.values == 100)
- assert np.array_equal(populated_collection['varying_series'].active_data.values, np.array([5, 10, 15, 20, 25]))
-
- # Series not in the DataFrame should be unchanged
- assert np.array_equal(
- populated_collection['extra_timestep_series'].active_data.values[:-1], np.array([1, 2, 3, 4, 5])
- )
-
- # Test with mismatched index
- bad_index = pd.date_range('2023-02-01', periods=5, freq='D', name='time')
- bad_data = pd.DataFrame({'constant_series': [1, 1, 1, 1, 1]}, index=bad_index)
-
- with pytest.raises(ValueError, match='must match collection timesteps'):
- populated_collection.insert_new_data(bad_data)
-
- def test_restore_data(self, populated_collection):
- """Test restoring original data."""
- # Capture original data
- original_values = {name: ts.stored_data.copy() for name, ts in populated_collection.time_series_data.items()}
-
- # Modify data
- new_data = pd.DataFrame(
- {
- name: np.ones(len(populated_collection.timesteps)) * 999
- for name in populated_collection.time_series_data
- if not populated_collection[name].needs_extra_timestep
- },
- index=populated_collection.timesteps,
- )
-
- populated_collection.insert_new_data(new_data)
-
- # Verify data was changed
- assert np.all(populated_collection['constant_series'].active_data.values == 999)
-
- # Restore data
- populated_collection.restore_data()
-
- # Verify data was restored
- for name, original in original_values.items():
- restored = populated_collection[name].stored_data
- assert np.array_equal(restored.values, original.values)
-
- def test_class_method_with_uniform_timesteps(self):
- """Test the with_uniform_timesteps class method."""
- collection = TimeSeriesCollection.with_uniform_timesteps(
- start_time=pd.Timestamp('2023-01-01'), periods=24, freq='h', hours_per_step=1
- )
-
- assert len(collection.timesteps) == 24
- assert collection.hours_of_previous_timesteps == 1
- assert (collection.timesteps[1] - collection.timesteps[0]) == pd.Timedelta(hours=1)
-
- def test_hours_per_timestep(self, populated_collection):
- """Test hours_per_timestep calculation."""
- # Standard case - uniform timesteps
- hours = populated_collection.hours_per_timestep.values
- assert np.allclose(hours, 24) # Default is daily timesteps
-
- # Create non-uniform timesteps
- non_uniform_times = pd.DatetimeIndex(
- [
- pd.Timestamp('2023-01-01'),
- pd.Timestamp('2023-01-02'),
- pd.Timestamp('2023-01-03 12:00:00'), # 1.5 days from previous
- pd.Timestamp('2023-01-04'), # 0.5 days from previous
- pd.Timestamp('2023-01-06'), # 2 days from previous
- ],
- name='time',
- )
-
- collection = TimeSeriesCollection(non_uniform_times)
- hours = collection.hours_per_timestep.values
-
- # Expected hours between timestamps
- expected = np.array([24, 36, 12, 48, 48])
- assert np.allclose(hours, expected)
-
- def test_validation_and_errors(self, sample_timesteps):
- """Test validation and error handling."""
- # Test non-DatetimeIndex
- with pytest.raises(TypeError, match='must be a pandas DatetimeIndex'):
- TimeSeriesCollection(pd.Index([1, 2, 3, 4, 5]))
-
- # Test too few timesteps
- with pytest.raises(ValueError, match='must contain at least 2 timestamps'):
- TimeSeriesCollection(pd.DatetimeIndex([pd.Timestamp('2023-01-01')], name='time'))
-
- # Test invalid active_timesteps
- collection = TimeSeriesCollection(sample_timesteps)
- invalid_timesteps = pd.date_range('2024-01-01', periods=3, freq='D', name='time')
-
- with pytest.raises(ValueError, match='must be a subset'):
- collection.activate_timesteps(invalid_timesteps)
diff --git a/tests/todos.txt b/tests/todos.txt
deleted file mode 100644
index d4628c259..000000000
--- a/tests/todos.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-# testing of
- # test piecewise-linear behaviour
- # components with open flows
- # binary variables without a maximum-value bound on the flow (binary inaccuracy problem)
- # medium admissibility