Feature/solution storage change+plotting acessors #506
Conversation
…y or Dataset instead of pandas DataFrame.
results.plot.variable(pattern)
Plots the same variable type across multiple elements for easy comparison.
```python
# All binary operation states across all components
results.plot.variable('on')

# All flow_rates, filtered to Boiler-related elements
results.plot.variable('flow_rate', include='Boiler')

# All storage charge states
results.plot.variable('charge_state')

# With aggregation
results.plot.variable('flow_rate', aggregate='sum')
```
Key features:
- Searches all elements for variables matching the pattern
- Filter with include/exclude on element names
- Supports aggregation, faceting, and animation
- Returns Dataset with element names as variables
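The pattern search and include/exclude filtering described above can be sketched roughly as follows. This is a simplified stand-in, not flixopt's actual implementation; `filter_variables` and the plain-list representation of the solution are hypothetical, and variable names are assumed to follow the `'Element|variable'` convention shown in the examples:

```python
from fnmatch import fnmatch

def filter_variables(all_vars, pattern, include=None, exclude=None):
    """Select variables whose trailing part matches `pattern`,
    optionally filtered by substrings of the element name."""
    matched = []
    for name in all_vars:
        element, _, var = name.rpartition('|')
        if not fnmatch(var, pattern):
            continue  # variable part doesn't match the pattern
        if include and include not in element:
            continue  # element name lacks the required substring
        if exclude and exclude in element:
            continue  # element name contains an excluded substring
        matched.append(name)
    return matched

vars_ = ['Boiler(Q_th)|flow_rate', 'CHP|on', 'Boiler|on', 'Storage|charge_state']
print(filter_variables(vars_, 'flow_rate', include='Boiler'))
# ['Boiler(Q_th)|flow_rate']
```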
results.plot.duration_curve(variables)
Plots load duration curves (sorted time series) showing utilization patterns.
```python
# Single variable
results.plot.duration_curve('Boiler(Q_th)|flow_rate')

# Multiple variables
results.plot.duration_curve(['CHP|on', 'Boiler|on'])

# Normalized x-axis (0-100%)
results.plot.duration_curve('demand', normalize=True)
```
Key features:
- Sorts values from highest to lowest
- Shows how often each power level is reached
- normalize=True shows percentage of time on x-axis
- Returns Dataset with duration_hours or duration_pct dimension
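The underlying idea of a load duration curve (sort descending, plot against cumulative duration) can be sketched with plain numpy. This is an illustrative sketch of the concept, not flixopt's implementation; the function name and `hours_per_step` parameter are made up for the example:

```python
import numpy as np

def duration_curve(values, hours_per_step=1.0, normalize=False):
    """Sort a time series from highest to lowest, returning (x, sorted_values).

    x is cumulative duration in hours, or percent of time if normalize=True."""
    sorted_vals = np.sort(np.asarray(values, dtype=float))[::-1]
    duration = np.arange(1, len(sorted_vals) + 1) * hours_per_step
    if normalize:
        duration = 100.0 * duration / duration[-1]
    return duration, sorted_vals

x, y = duration_curve([2.0, 5.0, 3.0, 0.0], normalize=True)
print(y)  # [5. 3. 2. 0.]
print(x)  # [ 25.  50.  75. 100.]
```

Reading off the curve: the y-value at a given x tells you the power level that is reached or exceeded for that fraction of the time.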
…r dimensions automatically
Summary of Fixes
I addressed the actionable code review comments from CodeRabbitAI:
1. Documentation Issue - reshape parameter name ✓ (No fix needed)
The CodeRabbitAI comment was incorrect. The public API parameter in PlotAccessor.heatmap() is correctly named reshape (line 335). The reshape_time parameter exists in the lower-level heatmap_with_plotly function, but the documentation correctly shows the public API parameter.
2. Fixed simple_example.py (lines 129-130)
Problem: The example called balance() and duration_curve() without required arguments, which would cause TypeError at runtime.
Fix: Added the required arguments:
- optimization.results.plot.balance('Fernwärme') - specifying the bus to plot
- optimization.results.plot.duration_curve('Boiler(Q_th)|flow_rate') - specifying the variable to plot
3. Fixed variable collision in plot_accessors.py (lines 985-988)
Problem: When building the Dataset in the variable() method, using element names as keys could cause collisions if multiple variables from the same element matched the pattern (e.g., 'Boiler|flow_rate' and 'Boiler|flow_rate_max' would both map to 'Boiler', with the latter silently overwriting the former).
Fix: Changed to use the full variable names as keys instead of just element names:
```python
ds = xr.Dataset({var_name: self._results.solution[var_name] for var_name in filtered_vars})
```
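The collision can be illustrated with plain dicts standing in for the `xr.Dataset` (the variable names here are hypothetical examples in the `'Element|variable'` format):

```python
# Hypothetical variable names that all match a 'flow_rate*' pattern
filtered_vars = ['Boiler|flow_rate', 'Boiler|flow_rate_max', 'CHP|flow_rate']
solution = {name: f'<data for {name}>' for name in filtered_vars}

# Old behavior: keying by element name silently drops data,
# since both Boiler variables map to the key 'Boiler'
by_element = {name.split('|')[0]: solution[name] for name in filtered_vars}
print(len(by_element))  # 2

# Fixed behavior: keying by the full variable name keeps every match
by_variable = {name: solution[name] for name in filtered_vars}
print(len(by_variable))  # 3
```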
All tests pass (40 passed, 1 skipped) and the example runs successfully.
| Method | facet_col | facet_row |
|------------------|-------------------------------------------|-----------------------------|
| balance() | 'scenario' | 'period' |
| heatmap() | 'scenario' | 'period' |
| storage() | 'scenario' | 'period' |
| flows() | 'scenario' | 'period' |
| effects() | 'scenario' | 'period' |
| variable() | 'scenario' | 'period' |
| duration_curve() | 'scenario' | 'period' (already had this) |
| compare() | N/A (uses its own mode='overlay'/'facet') | |
| sankey() | N/A (aggregates to single diagram) | |

Removed animate_by parameter from all methods.
| Method | Default mode |
|------------------|---------------------------------------------------|
| balance() | stacked_bar |
| storage() | stacked_bar (flows) + line (charge state overlay) |
| flows() | line |
| variable() | line |
| duration_curve() | line |
| effects() | bar |
Changes made:
1. Period aggregation with weights: uses period_weights from flow_system to properly weight periods by their duration
2. Scenario aggregation with weights: uses normalized scenario_weights to compute a weighted average across scenarios
3. Selection support: users can filter specific scenarios/periods via the select parameter before aggregation

Weighting logic:
- Periods (for aggregate='sum'): `(da * period_weights).sum(dim='period')` - this gives the total energy across all periods weighted by their duration
- Periods (for aggregate='mean'): `(da * period_weights).sum(dim='period') / period_weights.sum()` - weighted mean
- Scenarios: always uses normalized weights (sum to 1) for weighted averaging, since scenarios represent probability-weighted alternatives
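The weighting logic above can be checked numerically with plain numpy (standing in for the xarray operations; all values here are made-up examples):

```python
import numpy as np

# Hypothetical per-period values and the periods' duration weights
values = np.array([10.0, 20.0, 30.0])        # one value per period
period_weights = np.array([2.0, 1.0, 1.0])   # e.g. hours represented by each period

# aggregate='sum': total across periods, weighted by duration
weighted_sum = (values * period_weights).sum()
# aggregate='mean': duration-weighted mean
weighted_mean = weighted_sum / period_weights.sum()

# Scenario weights are normalized to sum to 1 before averaging
scenario_weights = np.array([3.0, 1.0])
norm = scenario_weights / scenario_weights.sum()   # [0.75, 0.25]
scenario_values = np.array([100.0, 200.0])
scenario_mean = (scenario_values * norm).sum()

print(weighted_sum)   # 70.0
print(weighted_mean)  # 17.5
print(scenario_mean)  # 125.0
```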
docs/user-guide/results/index.md:
- Updated table to replace effects_per_component with temporal_effects, periodic_effects, total_effects, and effect_share_factors
- Fixed flow_hours['Boiler(Q_th)|flow_rate'] → flow_hours['Boiler(Q_th)']
- Fixed sizes['Boiler(Q_th)|size'] → sizes['Boiler(Q_th)']
- Replaced effects_per_component example with new effect properties and groupby examples
- Updated complete example to use total_effects

docs/user-guide/results-plotting.md:
- Fixed colors example from 'Boiler(Q_th)|flow_rate' → 'Boiler(Q_th)'
- Fixed duration_curve examples to use clean labels

docs/user-guide/migration-guide-v6.md:
- Added new "Statistics Accessor" section explaining the clean labels and new effect properties
…/statistics_accessor.py:1132-1258.
Summary of what was done:
1. Implemented the effects() method in the StatisticsPlotAccessor class, which was documented but missing
- Takes aspect parameter: 'total', 'temporal', or 'periodic'
- Takes effect parameter to filter to a specific effect (e.g., 'costs', 'CO2')
- Takes by parameter: 'component' or 'time' for grouping
- Supports all standard plotting parameters: select, colors, facet_col, facet_row, show
- Returns PlotResult with both data and figure
2. Verified the implementation works with all parameter combinations:
- Default call: flow_system.statistics.plot.effects()
- Specific effect: flow_system.statistics.plot.effects(effect='costs')
- Temporal aspect: flow_system.statistics.plot.effects(aspect='temporal')
- Temporal by time: flow_system.statistics.plot.effects(aspect='temporal', by='time')
- Periodic aspect: flow_system.statistics.plot.effects(aspect='periodic')
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pyproject.toml (1)
107-114: Critical: Duplicate dependency with conflicting version constraints.
The `mkdocs-plotly-plugin` package appears twice in the `docs` optional dependencies list:
- Line 107: `mkdocs-plotly-plugin==0.1.3` (exact pin)
- Line 114: `mkdocs-plotly-plugin>=0.1.3` (relaxed constraint)

This creates ambiguity, and the second entry will likely override the first, making the exact pin ineffective. Apply this diff to remove the duplicate:
```diff
  "mkdocs-literate-nav==0.6.2",
- "mkdocs-plotly-plugin==0.1.3",
  "markdown-include==0.8.1",
  "pymdown-extensions==10.16.1",
  "pygments==2.19.2",
```
🧹 Nitpick comments (6)
flixopt/plotting.py (1)
1403-1414: Verify dimension name consistency after squeezing.
The fallback logic squeezes the data first, then uses `data.name` when expanding 1D to 2D. However, `data.name` refers to the original data before squeezing. If squeezing modifies the data structure significantly, using the original name might be inconsistent. Consider using `squeezed_data.name` for consistency:
```diff
  squeezed_data = data.squeeze()
  if squeezed_data.ndim == 1:
      # If only 1D after squeezing, expand to 2D
-     squeezed_data = squeezed_data.expand_dims({'variable': [data.name or 'value']})
+     squeezed_data = squeezed_data.expand_dims({'variable': [squeezed_data.name or 'value']})
  fallback_args = {
```
tests/test_effect.py (1)
226-345: Effect share tests align with the new statistics API; consider clarifying the inline "issue" comment
The rewritten `test_shares` correctly:
- Solves via `flow_system.optimize(highs_solver)`,
- Uses `flow_system.statistics.effect_share_factors` to check the temporal/periodic share factors implied by the `share_from_temporal`/`share_from_periodic` setup, and
- Verifies `temporal_effects`, `periodic_effects`, and `total_effects` against the corresponding entries in `flow_system.solution`.

The hard-coded factors (including the composed paths from `costs → Effect1 → Effect2/Effect3`) are consistent with the share graph. The "# This is where the issue lies" comment on the `('costs', 'Effect3')` factor is now ambiguous; consider either removing it or replacing it with a brief explanation of the expected composition to avoid confusion for future maintainers.
flixopt/results.py (1)
544-577: Deprecation of `flow_rates`, `flow_hours`, and `sizes` is clear; avoid double warnings in `flow_hours`
The new `.. deprecated::` blocks plus `warnings.warn(..., DeprecationWarning, stacklevel=2)` for `flow_rates()`, `flow_hours()`, and `sizes()` communicate the migration path to `results.plot.all_flow_rates / all_flow_hours / all_sizes` and to element-level accessors well.
One minor ergonomics point: `flow_hours()` currently calls `self.flow_rates()` to build its cache, so a call to `flow_hours()` will emit two deprecation warnings (one from `flow_hours`, one from `flow_rates`). If you want to keep the deprecation noise down, consider reusing the cached `_flow_rates` directly, e.g.:
```diff
- if self._flow_hours is None:
-     self._flow_hours = (self.flow_rates() * self.hours_per_timestep).rename('flow_hours')
+ if self._flow_hours is None:
+     if self._flow_rates is None:
+         # Populate underlying cache once, without re-emitting warnings
+         self._flow_rates = self._assign_flow_coords(
+             xr.concat(
+                 [flow.flow_rate.rename(flow.label) for flow in self.flows.values()],
+                 dim=pd.Index(self.flows.keys(), name='flow'),
+             )
+         ).rename('flow_rates')
+     self._flow_hours = (self._flow_rates * self.hours_per_timestep).rename('flow_hours')
```
The same idea could be applied if you introduce an internal helper for building the flow cache.
Also applies to: 598-657, 664-703
flixopt/statistics_accessor.py (3)
86-89: Consider handling multi-dimensional datasets in `to_csv`.
`xr.Dataset.to_dataframe()` can produce a MultiIndex DataFrame for multi-dimensional data, which may not export cleanly to CSV without additional handling. Consider adding `reset_index()` or documenting this behavior.
```diff
  def to_csv(self, path: str | Path, **kwargs: Any) -> PlotResult:
      """Export the underlying data to CSV. Returns self for chaining."""
-     self.data.to_dataframe().to_csv(path, **kwargs)
+     self.data.to_dataframe().reset_index().to_csv(path, index=False, **kwargs)
      return self
```
137-146: Minor: `to_dataframe()` already resets index; `melt` may duplicate coordinate columns.
The `reset_index()` call on line 144 flattens all coordinates. However, `list(ds.coords.keys())` on line 145 may include non-dimension coordinates (e.g., auxiliary coords like `component` or `component_type`), which could cause issues if they're not present as columns after `reset_index()`. Consider using dimension names rather than all coordinate names:
```diff
  df = ds.to_dataframe().reset_index()
- coord_cols = list(ds.coords.keys())
+ coord_cols = [c for c in ds.coords.keys() if c in df.columns]
  return df.melt(id_vars=coord_cols, var_name=var_name, value_name=value_name)
```
539-549: Nested loop inefficiency with `include_flows`.
The flow iteration logic (lines 540-549) is inside the conversion factor loop, meaning flows are processed repeatedly for each target effect. Consider hoisting the flow discovery outside the loop for better efficiency. This is a minor performance concern for typical use cases but could improve clarity:
```diff
+ # Pre-compute flows if needed
+ flows_to_check = []
+ if include_flows:
+     if element not in self._fs.components:
+         raise ValueError(f'Only use Components when retrieving Effects including flows. Got {element}')
+     comp = self._fs.components[element]
+     flows_to_check = [f.label_full.split('|')[0] for f in comp.inputs + comp.outputs]
  for target_effect, conversion_factor in relevant_conversion_factors.items():
      label = f'{element}->{target_effect}({mode})'
      if label in self._fs.solution:
          share_exists = True
          da = self._fs.solution[label]
          total = da * conversion_factor + total
-         if include_flows:
-             if element not in self._fs.components:
-                 raise ValueError(f'Only use Components when retrieving Effects including flows. Got {element}')
-             comp = self._fs.components[element]
-             flows = [f.label_full.split('|')[0] for f in comp.inputs + comp.outputs]
-             for flow in flows:
+         for flow in flows_to_check:
              label = f'{flow}->{target_effect}({mode})'
              ...
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- docs/home/quick-start.md (2 hunks)
- docs/user-guide/core-concepts.md (2 hunks)
- docs/user-guide/migration-guide-v6.md (1 hunk)
- docs/user-guide/optimization/index.md (2 hunks)
- docs/user-guide/results-plotting.md (1 hunk)
- docs/user-guide/results/index.md (1 hunk)
- examples/01_Simple/simple_example.py (1 hunk)
- flixopt/flow_system.py (6 hunks)
- flixopt/plotting.py (1 hunk)
- flixopt/results.py (9 hunks)
- flixopt/statistics_accessor.py (1 hunk)
- flixopt/topology_accessor.py (1 hunk)
- mkdocs.yml (1 hunk)
- pyproject.toml (1 hunk)
- tests/test_effect.py (3 hunks)
- tests/test_flow_system_resample.py (2 hunks)
- tests/test_topology_accessor.py (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (5)
tests/test_topology_accessor.py (2)
- flixopt/flow_system.py (2): topology (1033-1059), network_infos (1111-1123)
- flixopt/topology_accessor.py (1): infos (59-99)

examples/01_Simple/simple_example.py (1)
- flixopt/statistics_accessor.py (2): balance (648-725), duration_curve (1054-1130)

tests/test_flow_system_resample.py (2)
- flixopt/flow_system.py (2): build_model (842-873), resample (1730-1768)
- flixopt/structure.py (2): variables (1583-1589), constraints (1574-1580)

flixopt/topology_accessor.py (3)
- flixopt/flow_system.py (5): FlowSystem (45-1772), connected_and_transformed (1771-1772), flows (1288-1294), plot_network (1061-1081), _connect_network (1198-1222)
- flixopt/elements.py (1): Bus (183-296)
- flixopt/network_app.py (2): flow_graph (117-180), shownetwork (393-790)

flixopt/statistics_accessor.py (4)
- flixopt/config.py (2): CONFIG (181-814), Plotting (547-576)
- flixopt/flow_system.py (7): FlowSystem (45-1772), coords (1297-1303), sel (1476-1504), solution (944-946), solution (949-952), statistics (1006-1030), to_dataset (541-577)
- flixopt/effects.py (1): calculate_effect_share_factors (606-630)
- flixopt/plotting.py (2): reshape_data_for_heatmap (654-822), heatmap_with_plotly (1195-1416)
🪛 LanguageTool
docs/user-guide/results-plotting.md
[style] ~3-~3: Consider a different adjective to strengthen your wording.
Context: ...ss to common plots while still allowing deep customization. ## The Plot Accessor A...
(DEEP_PROFOUND)
🔇 Additional comments (30)
mkdocs.yml (1)
29-29: LGTM! Navigation entry added for new plotting documentation.
The new "Plotting Results" entry is appropriately placed in the User Guide navigation structure after "Analyzing Results".
docs/user-guide/migration-guide-v6.md (1)
299-321: LGTM! Clear documentation of the new statistics accessor API.
The new "Statistics Accessor" section provides comprehensive examples of the new aggregated data access patterns, including flow data, effect breakdowns, and groupby capabilities. The examples clearly show the clean label format (without the `|flow_rate` suffix), which aligns with the new API design.
docs/home/quick-start.md (2)
91-116: LGTM! Quick-start guide updated to reflect new visualization workflow.
The documentation now appropriately guides users through:
- Optional pre-optimization topology visualization using `flow_system.topology.plot()`
- Direct FlowSystem optimization
- Multi-level result access via `flow_system.solution`, `flow_system.statistics`, and component-specific results
- New plotting capabilities via `flow_system.statistics.plot.*` methods

The progression from raw solution data → aggregated statistics → visualization is clear and well-structured.
145-149: Well-structured workflow steps.
The updated workflow steps now include:
- Pre-optimization structure verification (step 5)
- Explicit statistics-based analysis (step 7)
- Dedicated visualization step (step 8)
This provides a more comprehensive guide for users following best practices.
docs/user-guide/optimization/index.md (2)
5-18: LGTM! Helpful pre-optimization verification guide.
The new "Verifying Your Model" section provides practical guidance on using the topology accessor to inspect system structure before optimization. The examples show both visual (plot) and programmatic (infos) approaches, which is valuable for different use cases.
96-196: Excellent custom constraints documentation.
The new "Custom Constraints" section provides:
- Clear workflow: build_model() → access variables → add constraints → solve()
- Practical examples (minimum runtime, linked flows, seasonal constraints)
- Comprehensive variable naming convention reference
- Model inspection guidance
This empowers advanced users to extend flixOpt while maintaining the framework's convenience.
docs/user-guide/core-concepts.md (2)
130-153: LGTM! Core concepts updated to reflect new FlowSystem-centric API.
The workflow section now correctly guides users to:
- Use `flow_system.optimize()` directly instead of creating an Optimization object
- Access results via `flow_system.solution` for raw data and `flow_system.statistics` for aggregated data
- Use component-specific access via `flow_system.components['Name'].solution`

This simplifies the API and makes the workflow more intuitive.
190-205: Clear guidance on extending with linopy.
The advanced section now correctly shows the `flow_system.build_model()` → `flow_system.model` access pattern for custom constraints. This is consistent with the detailed examples in the optimization guide.
tests/test_flow_system_resample.py (2)
189-213: Modeling integration after resample is wired to the new `build_model` API correctly
Using `fs_r.build_model()` and asserting `fs_r.model` is non-None with a non-empty `variables` collection is the right smoke-test for the new workflow; the fixture setup and parameterization over `with_dim` still look consistent.
215-238: Structure comparison focuses on variable/constraint types, which matches FlowSystemModel semantics
Comparing `len(fs.model.variables)` / `.constraints` and the sets of `labels.data_vars.keys()` between original and resampled systems is an appropriate way to verify that resampling preserves the model's variable/constraint types while allowing dimension sizes (e.g. time) to differ.
tests/test_topology_accessor.py (3)
11-63: Topology `infos()` tests give good coverage of node/edge structure
The fixture normalization plus checks on tuple shape, dict types, node/edge key presence, and per-entry fields (`label`, `class`, `infos`, `start`, `end`) exercise the accessor well for a simple system and align with the implementation in `TopologyAccessor.infos()`.
69-126: Plotting tests correctly handle optional `pyvis` and validate HTML output
The `try/except ImportError` pattern in `test_plot_returns_network_or_none` and `pytest.importorskip('pyvis')` in the HTML test match the contract of `topology.plot()`, and the content check for basic HTML markers is a pragmatic sanity check.
100-126: Deprecation coverage for legacy network APIs looks solid
Asserting `DeprecationWarning` for `network_infos()` and `plot_network()`, and then checking that `network_infos()` returns exactly the same `(nodes, edges)` as `topology.infos()` ensures the deprecation shims in `FlowSystem` stay behavior-compatible.
docs/user-guide/results-plotting.md (1)
19-25: Please verify the `PlotResult` reference target
The docs link to [`PlotResult`][flixopt.plot_accessors.PlotResult]; just confirm that this matches the actual module/class location after the refactor (vs. e.g. a statistics accessor module) so the generated API docs hyperlink works.
flixopt/results.py (1)
1178-1185: `variable_names` refactor is consistent across element results and helpers
Switching `_ElementResults` to store `variable_names` and using it in:
- `self.solution = self._results.solution[self.variable_names]`,
- `variables` access (`self._results.model.variables[self.variable_names]`),
- `ComponentResults.is_storage` (`self._charge_state in self.variable_names`),
- `EffectResults.get_shares_from` (prefix filter on `self.variable_names`), and
- `Results.setup_colors.get_all_variable_names`,

keeps variable selection logic centralized and avoids reliance on a private `_variable_names` attribute. This is coherent and shouldn't change external behavior as long as callers construct `_ElementResults` with the same `variables` list as before.
Also applies to: 1660-1661, 1921-1922, 394-400
docs/user-guide/results/index.md (1)
199-215: Docs are in line with the new results/statistics/topology APIs; double-check extras name consistency
This page accurately documents:
- `flow_system.solution` as the primary xarray Dataset,
- `flow_system.statistics` (including `flow_rates`, `flow_hours`, sizes, charge_states, and effect aggregates),
- `flow_system.statistics.plot.*` for visualization, and
- `flow_system.topology.*` for network visualization and `infos()`.

One thing to verify is that the recommended extra `flixopt[network_viz]` in the "Optional Dependencies" note matches the actual extras name(s) used in runtime ImportError messages (some existing code still mentions `flixopt[viz]`). Aligning those names will avoid confusion for users installing optional deps.
Also applies to: 134-181, 3-22
flixopt/topology_accessor.py (3)
59-99: Topology `infos()` implementation matches test and doc expectations
`infos()`:
- Lazily connects/transforms the `FlowSystem` if needed,
- Produces node entries keyed by `label_full` with `label`, `class` (`Bus` vs `Component`), and textual `infos`, and
- Produces edge entries keyed by `flow.label_full` with `start`/`end` derived from `is_input_in_component`.

This aligns with how tests assert membership and with the documented example usage for topology inspection.
101-181: `plot()` and `start_app()` cleanly delegate visualization and handle optional deps
The `plot()` method reuses `infos()` and delegates to `plotting.plot_network`, correctly honoring `path`, `controls`, and `show` (defaulting via `CONFIG.Plotting.default_show`).
`start_app()`:
- Warns that the feature is experimental,
- Fails fast with a clear `ImportError` when Dash/Cytoscape deps are missing, and
- Ensures the network is connected (via `_connect_network()`) before building the NetworkX graph.

This matches the new `flow_system.topology.*` docs and the legacy behavior wrapped by the deprecated FlowSystem methods.
192-220: `stop_app()` shutdown logic is robust against missing deps and missing app instance
`stop_app()` mirrors the dependency check from `start_app()`, logs a warning when no app is running, and guards the shutdown in a try/except, always clearing `self._fs._network_app` in the `finally` block. This is a reasonable lifecycle wrapper around the underlying Dash server instance.
flixopt/flow_system.py (3)
210-218: `solution` backing field and statistics cache invalidation are implemented correctly
Introducing `_solution` with a `solution` property and a setter that clears `_statistics` ensures:
- Code that uses `flow_system.solution` remains clean and typed as `xr.Dataset | None`, and
- Any change of solution (e.g. after `solve()` or `from_dataset()`) forces the next access to `flow_system.statistics` to rebuild its accessor, preventing stale cached statistics for the common "access via property" pattern.

This is a sensible way to manage statistics lifetime without touching the underlying `FlowSystemModel`.
Also applies to: 943-953
1006-1030: `statistics` accessor wiring matches the intended API
The `statistics` property:
- Lazily creates `StatisticsAccessor(self)` on first use,
- Caches it in `_statistics`, and
- Relies on the `solution` setter to invalidate the cache on new solves.

The docstring and examples (`flow_system.statistics.plot.balance(...)`, `.heatmap(...)`, `.flow_rates`) align with the new user-facing API described in the docs. Just ensure `StatisticsAccessor` itself guards against `self._fs.solution is None` where appropriate.
1032-1123: Topology accessor property and deprecation wrappers are clean and backward-compatible
`topology` consistently returns a `TopologyAccessor(self)`, exposing `.plot()`, `.start_app()`, `.stop_app()`, and `.infos()` under the new namespace. `plot_network`, `start_network_app`, `stop_network_app`, and `network_infos` all:
- Emit a `DeprecationWarning` with the shared `DEPRECATION_REMOVAL_VERSION`, and
- Delegate directly to the corresponding `topology` methods.

This keeps existing code working while clearly steering users toward the new accessor-based API.
flixopt/statistics_accessor.py (8)
1-39: LGTM!
The module structure, imports, and docstring are well-organized. Good use of `TYPE_CHECKING` to avoid circular imports with `FlowSystem`.
41-46: LGTM!
Clear type aliases with helpful docstrings explaining their purpose.
1132-1271: LGTM!
The `effects` method is well-designed with proper validation, flexible grouping options (`by='component'`, `by='contributor'`, `by='time'`), and correct handling of dimension aggregation for each case.
875-989: LGTM!
The Sankey diagram implementation correctly handles flow direction semantics (bus→component vs component→bus), applies proper weight normalization, and produces meaningful visualization data.
648-726: LGTM!
The `balance` method correctly handles both Bus and Component nodes, properly negates input flows for visualization semantics, and has good error handling for edge cases.
831-854: LGTM!
The flow filtering logic correctly handles flow direction semantics using `is_input_in_component` to determine source/target relationships. The filtering by start/end/component is intuitive.
555-569: Consider edge case when no coordinates are present.
Lines 568-569 return a scalar `xr.DataArray(np.nan)` when no coordinates are available. This is valid, but worth noting that it produces a 0-dimensional array, which may require special handling downstream when concatenating with dimensioned arrays (handled on lines 608-609).
904-906: No action needed; scenario weight normalization is correct.
The code properly normalizes `scenario_weights` before applying them. `FlowSystem.scenario_weights` stores raw, unnormalized values by design; normalization is applied contextually depending on usage. In the statistics accessor, normalizing ensures the weighted result preserves the dataset's overall magnitude while distributing it across scenarios proportionally. This pattern aligns with how scenario weights are used throughout the codebase.
examples/01_Simple/simple_example.py
```python
optimization.results.plot.balance('Fernwärme')
optimization.results.plot.duration_curve('Boiler(Q_th)|flow_rate')
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Find statistics_accessor.py file
find . -name "statistics_accessor.py" -type f
```
Repository: flixOpt/flixopt
Length of output: 91
🏁 Script executed:
```shell
#!/bin/bash
# Read the duration_curve method from statistics_accessor.py
sed -n '1053,1129p' ./flixopt/statistics_accessor.py
```
Repository: flixOpt/flixopt
Length of output: 2614
🏁 Script executed:
```shell
#!/bin/bash
# Check the exact code at line 130 in simple_example.py
sed -n '125,135p' ./examples/01_Simple/simple_example.py
```
Repository: flixOpt/flixopt
Length of output: 260
🏁 Script executed:
```shell
#!/bin/bash
# Search for how flow_rates is constructed or documented
rg "flow_rates" ./flixopt/statistics_accessor.py -B 2 -A 2 | head -40
```
Repository: flixOpt/flixopt
Length of output: 1391
Fix duration_curve label to use clean flow name without suffix.
The duration_curve method accesses the flow_rates dataset, which stores flow variables with clean labels (the |flow_rate suffix is removed during processing). Line 130 uses 'Boiler(Q_th)|flow_rate', which is the raw solution variable name and will fail when accessed in the flow_rates dictionary.
```diff
- optimization.results.plot.duration_curve('Boiler(Q_th)|flow_rate')
+ optimization.results.plot.duration_curve('Boiler(Q_th)')
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
  optimization.results.plot.balance('Fernwärme')
- optimization.results.plot.duration_curve('Boiler(Q_th)|flow_rate')
+ optimization.results.plot.duration_curve('Boiler(Q_th)')
```
🤖 Prompt for AI Agents
In examples/01_Simple/simple_example.py around lines 129 to 130, the call to
duration_curve uses the raw solution variable name with the '|flow_rate' suffix
which doesn't match the cleaned keys in the flow_rates dataset; change the label
passed to duration_curve to the cleaned flow name (remove the '|flow_rate'
suffix), e.g. use 'Boiler(Q_th)' instead of 'Boiler(Q_th)|flow_rate' so the
method can find the flow_rates entry.
```python
computed = ds[effect].sum('contributor')
found = self._fs.solution[label]
if not np.allclose(computed.values, found.fillna(0).values):
    logger.critical(
        f'Results for {effect}({mode}) in effects_dataset doesnt match {label}\n{computed=}\n, {found=}'
    )
```
Asymmetric NaN handling in validation check.
The validation compares computed.values directly against found.fillna(0).values, but computed doesn't receive the same fillna(0) treatment. This asymmetry could cause false warnings when computed contains NaN values that would otherwise match.
```diff
- if not np.allclose(computed.values, found.fillna(0).values):
+ if not np.allclose(computed.fillna(0).values, found.fillna(0).values, equal_nan=True):
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
  computed = ds[effect].sum('contributor')
  found = self._fs.solution[label]
- if not np.allclose(computed.values, found.fillna(0).values):
+ if not np.allclose(computed.fillna(0).values, found.fillna(0).values, equal_nan=True):
      logger.critical(
          f'Results for {effect}({mode}) in effects_dataset doesnt match {label}\n{computed=}\n, {found=}'
      )
```
🤖 Prompt for AI Agents
In flixopt/statistics_accessor.py around lines 625 to 630, the validation
compares computed.values to found.fillna(0).values asymmetrically causing false
mismatches when computed contains NaNs; fix by ensuring symmetric NaN handling —
either apply the same fillna(0) to computed (e.g., computed = computed.fillna(0)
before comparing) or use np.allclose(computed.values, found.fillna(0).values,
equal_nan=True); update the logger conditional to use the chosen symmetric
comparison so NaNs are treated consistently.
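The asymmetry can be demonstrated with numpy arrays standing in for the DataArrays (`np.nan_to_num` plays the role of `.fillna(0)` here; the values are made-up examples):

```python
import numpy as np

computed = np.array([1.0, np.nan, 3.0])
found = np.array([1.0, np.nan, 3.0])

# Asymmetric handling: only one side replaces NaN with 0 -> spurious mismatch,
# since the remaining NaN in `computed` never compares equal to 0.0
asymmetric = np.allclose(computed, np.nan_to_num(found))
print(asymmetric)  # False

# Symmetric handling: fill both sides, or compare NaNs as equal
symmetric = np.allclose(np.nan_to_num(computed), np.nan_to_num(found))
print(symmetric)  # True
print(np.allclose(computed, found, equal_nan=True))  # True
```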
* Add planning doc
* Finalize planning
* Add plotting accessor
* Add plotting accessor
* Add tests
* Improve
* Improve
* Update docs
* Updated the plotting API so that .data always returns xarray DataArray or Dataset instead of pandas DataFrame.
* All .data now returns xr.Dataset consistently.
* Fixed Inconsistencies and Unused Parameters
* New Plot Accessors
results.plot.variable(pattern)
Plots the same variable type across multiple elements for easy comparison.
```python
# All binary operation states across all components
results.plot.variable('on')
# All flow_rates, filtered to Boiler-related elements
results.plot.variable('flow_rate', include='Boiler')
# All storage charge states
results.plot.variable('charge_state')
# With aggregation
results.plot.variable('flow_rate', aggregate='sum')
```
Key features:
- Searches all elements for variables matching the pattern
- Filter with include/exclude on element names
- Supports aggregation, faceting, and animation
- Returns Dataset with element names as variables
results.plot.duration_curve(variables)
Plots load duration curves (sorted time series) showing utilization patterns.
```python
# Single variable
results.plot.duration_curve('Boiler(Q_th)|flow_rate')
# Multiple variables
results.plot.duration_curve(['CHP|on', 'Boiler|on'])
# Normalized x-axis (0-100%)
results.plot.duration_curve('demand', normalize=True)
```
Key features:
- Sorts values from highest to lowest
- Shows how often each power level is reached
- normalize=True shows percentage of time on x-axis
- Returns Dataset with duration_hours or duration_pct dimension
* Fix duration curve
* Fix duration curve
* Fix duration curve
* Fix duration curve
* xr.apply_ufunc to sort along the time axis while preserving all other dimensions automatically
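The `xr.apply_ufunc` approach above can be sketched as follows — a minimal, self-contained example (the sample data and helper name are illustrative, not the project's actual implementation):

```python
import numpy as np
import xarray as xr

# Hypothetical data: a variable over (scenario, time)
da = xr.DataArray(
    np.array([[3.0, 1.0, 2.0], [0.5, 2.5, 1.5]]),
    dims=('scenario', 'time'),
)

def _sort_descending(values: np.ndarray) -> np.ndarray:
    # Sort along the last axis (the core dim), highest value first
    return np.sort(values, axis=-1)[..., ::-1]

sorted_da = xr.apply_ufunc(
    _sort_descending,
    da,
    input_core_dims=[['time']],   # move 'time' to the last axis before the call
    output_core_dims=[['time']],  # reattach 'time' afterwards
)
# sorted_da.values → [[3.0, 2.0, 1.0], [2.5, 1.5, 0.5]]
```

Because only `'time'` is declared as a core dimension, every other dimension (scenarios, periods, etc.) is broadcast automatically, which is what makes the duration-curve sorting dimension-agnostic.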
* The example runs successfully.
Summary of Fixes
I addressed the actionable code review comments from CodeRabbitAI:
1. Documentation Issue - reshape parameter name ✓ (No fix needed)
The CodeRabbitAI comment was incorrect. The public API parameter in PlotAccessor.heatmap() is correctly named reshape (line 335). The reshape_time parameter exists in the lower-level heatmap_with_plotly function, but the documentation correctly shows the public API parameter.
2. Fixed simple_example.py (lines 129-130)
Problem: The example called balance() and duration_curve() without required arguments, which would cause TypeError at runtime.
Fix: Added the required arguments:
- optimization.results.plot.balance('Fernwärme') - specifying the bus to plot
- optimization.results.plot.duration_curve('Boiler(Q_th)|flow_rate') - specifying the variable to plot
3. Fixed variable collision in plot_accessors.py (lines 985-988)
Problem: When building the Dataset in the variable() method, using element names as keys could cause collisions if multiple variables from the same element matched the pattern (e.g., 'Boiler|flow_rate' and 'Boiler|flow_rate_max' would both map to 'Boiler', with the latter silently overwriting the former).
Fix: Changed to use the full variable names as keys instead of just element names:
```python
ds = xr.Dataset({var_name: self._results.solution[var_name] for var_name in filtered_vars})
```
All tests pass (40 passed, 1 skipped) and the example runs successfully.
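The collision described in fix 3 is easy to demonstrate with plain dicts (variable names are illustrative):

```python
# Why element names make bad Dataset keys: two matching variables from the
# same element collapse to one entry, silently dropping data.
matched = ['Boiler|flow_rate', 'Boiler|flow_rate_max']

by_element = {name.split('|')[0]: name for name in matched}
# → {'Boiler': 'Boiler|flow_rate_max'}  (first match silently overwritten)

by_variable = {name: name for name in matched}
# → both variables preserved
```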
* make variable names public in results
* Fix sankey
* Fix effects()
* Fix effects
* Remove bargaps
* made faceting consistent across all plot methods:
| Method | facet_col | facet_row |
|------------------|-------------------------------------------|-----------------------------|
| balance() | 'scenario' | 'period' |
| heatmap() | 'scenario' | 'period' |
| storage() | 'scenario' | 'period' |
| flows() | 'scenario' | 'period' |
| effects() | 'scenario' | 'period' |
| variable() | 'scenario' | 'period' |
| duration_curve() | 'scenario' | 'period' (already had this) |
| compare() | N/A (uses its own mode='overlay'/'facet') | |
| sankey() | N/A (aggregates to single diagram) | |
Removed animate_by parameter from all methods
* Update storage method
* Remove mode parameter for simplicity
| Method | Default mode |
|------------------|---------------------------------------------------|
| balance() | stacked_bar |
| storage() | stacked_bar (flows) + line (charge state overlay) |
| flows() | line |
| variable() | line |
| duration_curve() | line |
| effects() | bar |
* Make plotting_accessors.py more self contained
* Use faster to_long()
* Add 0-dim case
* sankey diagram now properly handles scenarios and periods:
Changes made:
1. Period aggregation with weights: Uses period_weights from flow_system to properly weight periods by their duration
2. Scenario aggregation with weights: Uses normalized scenario_weights to compute a weighted average across scenarios
3. Selection support: Users can filter specific scenarios/periods via select parameter before aggregation
Weighting logic:
- Periods (for aggregate='sum'): (da * period_weights).sum(dim='period') - this gives the total energy across all periods weighted by their duration
- Periods (for aggregate='mean'): (da * period_weights).sum(dim='period') / period_weights.sum() - weighted mean
- Scenarios: Always uses normalized weights (sum to 1) for weighted averaging, since scenarios represent probability-weighted alternatives
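The weighting logic above can be sketched with xarray directly — a minimal example under assumed sample data and weight values (none of these numbers come from the project):

```python
import numpy as np
import xarray as xr

# Hypothetical flow_hours over (period, scenario)
da = xr.DataArray(
    np.array([[10.0, 20.0], [30.0, 40.0]]),
    dims=('period', 'scenario'),
)
period_weights = xr.DataArray([1.0, 2.0], dims='period')      # e.g. period durations
scenario_weights = xr.DataArray([3.0, 1.0], dims='scenario')  # unnormalized

# Periods, aggregate='sum': duration-weighted total per scenario
per_scenario_total = (da * period_weights).sum(dim='period')

# Periods, aggregate='mean': duration-weighted mean per scenario
per_scenario_mean = (da * period_weights).sum(dim='period') / period_weights.sum()

# Scenarios: normalize weights to sum to 1, then take the weighted average
norm = scenario_weights / scenario_weights.sum()
expected_value = (per_scenario_total * norm).sum(dim='scenario')
```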
* Add colors to sankey
* Add sizes
* Add size filtering
* Include storage sizes
* Remove storage sizes
* Add charge state and status accessor
* Summary of Changes
1. Added new methods to PlotAccessor (plot_accessors.py)
charge_states() (line 658):
- Returns a Dataset with each storage's charge state as a variable
- Supports filtering with include/exclude parameters
- Default plot: line chart
on_states() (line 753):
- Returns a Dataset with each component's |status variable
- Supports filtering with include/exclude parameters
- Default plot: heatmap (good for binary data visualization)
2. Added data building helper functions (plot_accessors.py)
build_flow_rates(results) (line 315):
- Builds a DataArray containing flow rates for all flows
- Used internally by PlotAccessor methods
build_flow_hours(results) (line 333):
- Builds a DataArray containing flow hours for all flows
build_sizes(results) (line 347):
- Builds a DataArray containing sizes for all flows
_filter_dataarray_by_coord(da, **kwargs) (line 284):
- Helper for filtering DataArrays by coordinate values
3. Deprecated old Results methods (results.py)
The following methods now emit DeprecationWarning:
- results.flow_rates() → Use results.plot.flows(plot=False).data
- results.flow_hours() → Use results.plot.flows(unit='flow_hours', plot=False).data
- results.sizes() → Use results.plot.sizes(plot=False).data
4. Updated PlotAccessor methods to use new helpers
- flows() now uses build_flow_rates() / build_flow_hours() directly
- sizes() now uses build_sizes() directly
- sankey() now uses build_flow_hours() directly
This ensures the deprecation warnings only fire when users directly call the old methods, not when using the plot accessor
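The deprecation pattern in point 3 follows the standard library convention; a minimal stand-in sketch (the function body and message wording are illustrative, not the actual `Results` implementation):

```python
import warnings

def flow_rates():
    """Deprecated stand-in: emits a DeprecationWarning, then would delegate."""
    warnings.warn(
        'flow_rates() is deprecated; use results.plot.flows(plot=False).data instead.',
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller, not this wrapper
    )
    # ... original implementation would follow here ...
```

Calling the old method once triggers exactly one `DeprecationWarning`; accessing the data through the plot accessor does not, which matches the note above about warnings only firing on direct calls.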
* 1. New methods added to PlotAccessor
- charge_states(): Returns Dataset with all storage charge states
- on_states(): Returns Dataset with all component status variables (heatmap display)
2. Data building helper functions (plot_accessors.py)
- build_flow_rates(results): Builds DataArray of flow rates
- build_flow_hours(results): Builds DataArray of flow hours
- build_sizes(results): Builds DataArray of sizes
- _filter_dataarray_by_coord(da, **kwargs): Filter helper
- _assign_flow_coords(da, results): Add flow coordinates
3. Caching in PlotAccessor
Added lazy-cached properties for expensive computations:
- _all_flow_rates - cached DataArray of all flow rates
- _all_flow_hours - cached DataArray of all flow hours
- _all_sizes - cached DataArray of all sizes
- _all_charge_states - cached Dataset of all storage charge states
- _all_status_vars - cached Dataset of all status variables
4. Deprecated methods in Results class
Added deprecation warnings to:
- results.flow_rates() → Use results.plot.flows(plot=False).data
- results.flow_hours() → Use results.plot.flows(unit='flow_hours', plot=False).data
- results.sizes() → Use results.plot.sizes(plot=False).data
5. Updated PlotAccessor methods to use cached properties
- flows() uses _all_flow_rates / _all_flow_hours
- sankey() uses _all_flow_hours
- sizes() uses _all_sizes
- charge_states() uses _all_charge_states
- on_states() uses _all_status_vars
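The lazy-cached properties in point 3 map naturally onto `functools.cached_property`; a minimal sketch of the pattern, including cache invalidation (class name, property body, and counter are illustrative):

```python
from functools import cached_property

class PlotAccessorSketch:
    """Minimal sketch of the lazy-caching pattern used for expensive builds."""

    def __init__(self):
        self.build_count = 0  # tracks how often the expensive build runs

    @cached_property
    def _all_flow_rates(self):
        # Runs once; the result is stored on the instance afterwards
        self.build_count += 1
        return {'Boiler(Q_th)': [1.0, 2.0]}

    def invalidate(self):
        # cached_property stores its value in the instance dict;
        # deleting that entry forces a rebuild on next access
        self.__dict__.pop('_all_flow_rates', None)
```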
* Move deprecated functionality into results.py instead of porting to the new module
* Revert to simply deprecating old methods without forwarding to new code
* Remove planning file
* Update plotting methods for new datasets
* 1. Renamed data properties in PlotAccessor to use all_ prefix:
- all_flow_rates - All flow rates as Dataset
- all_flow_hours - All flow hours as Dataset
- all_sizes - All flow sizes as Dataset
- all_charge_states - All storage charge states as Dataset
- all_on_states - All component on/off status as Dataset
2. Updated internal references - All usages in flows(), sankey(), sizes(), charge_states(), and on_states() methods now use the new names.
3. Updated deprecation messages in results.py to point to the new API:
- results.flow_rates() → results.plot.all_flow_rates
- results.flow_hours() → results.plot.all_flow_hours
- results.sizes() → results.plot.all_sizes
4. Updated docstring examples in PlotAccessor to use the new all_* names.
* Update deprecation messages
* Update deprecation messages
* This seems much better.
* Update docstrings and variable name generation in plotting accessor
* Change __ to _ in private dataset caching
* Revert breaking io changes
* New solution storing interface
* Add new focused statistics and plot accessors
* Renamed all properties:
- all_flow_rates → flow_rates
- all_flow_hours → flow_hours
- all_sizes → sizes
- all_charge_states → charge_states
* Cache Statistics
* Invalidate caches
* Add effect related statistics
* Simplify statistics accessor to rely on flow_system directly instead of solution attrs
* Fix heatmap fallback for 1D data
* Add topology accessor
* All deprecation warnings in the codebase now consistently use the format will be removed in v{DEPRECATION_REMOVAL_VERSION}.
* Update tests
* created comprehensive documentation for all FlowSystem accessors
* Update results documentation
* Update results documentation
* Update effect statistics
* Update effect statistics
* Update effect statistics
* Add mkdocs plotly plugin
* Add section about custom constraints
* documentation updates:
docs/user-guide/results/index.md:
- Updated table to replace effects_per_component with temporal_effects, periodic_effects, total_effects, and effect_share_factors
- Fixed flow_hours['Boiler(Q_th)|flow_rate'] → flow_hours['Boiler(Q_th)']
- Fixed sizes['Boiler(Q_th)|size'] → sizes['Boiler(Q_th)']
- Replaced effects_per_component example with new effect properties and groupby examples
- Updated complete example to use total_effects
docs/user-guide/results-plotting.md:
- Fixed colors example from 'Boiler(Q_th)|flow_rate' → 'Boiler(Q_th)'
- Fixed duration_curve examples to use clean labels
docs/user-guide/migration-guide-v6.md:
- Added new "Statistics Accessor" section explaining the clean labels and new effect properties
* implemented the effects() method in StatisticsPlotAccessor at flixopt/statistics_accessor.py:1132-1258.
Summary of what was done:
1. Implemented effects() method in StatisticsPlotAccessor class that was missing but documented
- Takes aspect parameter: 'total', 'temporal', or 'periodic'
- Takes effect parameter to filter to a specific effect (e.g., 'costs', 'CO2')
- Takes by parameter: 'component' or 'time' for grouping
- Supports all standard plotting parameters: select, colors, facet_col, facet_row, show
- Returns PlotResult with both data and figure
2. Verified the implementation works with all parameter combinations:
- Default call: flow_system.statistics.plot.effects()
- Specific effect: flow_system.statistics.plot.effects(effect='costs')
- Temporal aspect: flow_system.statistics.plot.effects(aspect='temporal')
- Temporal by time: flow_system.statistics.plot.effects(aspect='temporal', by='time')
- Periodic aspect: flow_system.statistics.plot.effects(aspect='periodic')
* Remove intermediate plot accessor
* 1. pyproject.toml: Removed duplicate mkdocs-plotly-plugin>=0.1.3 entry (kept the exact pin ==0.1.3)
2. flixopt/plotting.py: Fixed dimension name consistency by using squeezed_data.name instead of data.name in the fallback heatmap logic
3. flixopt/statistics_accessor.py:
- Fixed _dataset_to_long_df() to only use coordinates that are actually present as columns after reset_index()
- Fixed the nested loop inefficiency with include_flows by pre-computing the flows list outside the loop
- (Previously fixed) Fixed asymmetric NaN handling in validation check
* _create_effects_dataset method in statistics_accessor.py was simplified:
1. Detect contributors from solution data variables instead of assuming they're only flows
- Uses regex pattern to find {contributor}->{effect}(temporal|periodic) variables
- Contributors can be flows OR components (e.g., components with effects_per_active_hour)
2. Exclude effect-to-effect shares
- Filters out contributors whose base name matches any effect label
- For example, costs(temporal) is excluded because costs is an effect label
- These intermediate shares are already included in the computation
3. Removed the unused _compute_effect_total method
- The new simplified implementation directly looks up shares from the solution
- Uses effect_share_factors for conversion between effects
4. Key insight from user: The solution already contains properly computed share values including all effect-to-effect conversions. The computation uses conversion
factors because derived effects (like Effect1 which shares 0.5 from costs) don't have direct {flow}->Effect1(temporal) variables - only the source effect shares exist
({flow}->costs(temporal)).
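The contributor detection described in points 1 and 2 can be sketched with a regex over the solution's variable names — an illustrative approximation, not the exact pattern used in `_create_effects_dataset` (variable names and effect labels below are made up):

```python
import re

# Hypothetical pattern for '{contributor}->{effect}(temporal|periodic)' names
SHARE_PATTERN = re.compile(r'^(?P<contributor>.+)->(?P<effect>[^(]+)\((?:temporal|periodic)\)$')

effect_labels = {'costs', 'CO2'}
solution_vars = [
    'Boiler(Q_th)->costs(temporal)',  # flow contributor: kept
    'CHP->costs(temporal)',           # component contributor: kept
    'costs->CO2(temporal)',           # effect-to-effect share: excluded
]

contributors = set()
for name in solution_vars:
    m = SHARE_PATTERN.match(name)
    if m and m.group('contributor') not in effect_labels:
        contributors.add(m.group('contributor'))
# contributors → {'Boiler(Q_th)', 'CHP'}
```

Filtering out contributors whose base name is itself an effect label drops the intermediate effect-to-effect shares, since those amounts are already folded into the direct contributions via `effect_share_factors`.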
* Update docs
* Improve to_netcdf method
* Update examples
* Fix IIS computation flag
* Fix examples
* Fix faceting in heatmap and use period as facet col everywhere
* Inline plotting methods to deprecate plotting.py (#508)
* Inline plotting methods to deprecate plotting.py
* Fix test
* Simplify Color Management
* ColorType is now defined in color_processing.py and imported into statistics_accessor.py.
* Fix ColorType typing
* statistics_accessor.py - Heatmap colors type safety (lines 121-148, 820-853)
- Changed _heatmap_figure() parameter type from colors: ColorType = None to colors: str | list[str] | None = None
- Changed heatmap() method parameter type similarly
- Updated docstrings to clarify that dicts are not supported for heatmaps since px.imshow's color_continuous_scale only accepts colorscale names or lists
2. statistics_accessor.py - Use configured qualitative colorscale (lines 284, 315)
- Updated _create_stacked_bar() to use CONFIG.Plotting.default_qualitative_colorscale as the default colorscale
- Updated _create_line() similarly
- This ensures user-configured CONFIG.Plotting.default_qualitative_colorscale affects all bar/line plots consistently
3. topology_accessor.py - Path type alignment (lines 219-222)
- Added normalization of path=False to None before calling _plot_network()
- This resolves the type mismatch where TopologyAccessor.plot() accepts bool | str | Path but _plot_network() only accepts str | Path | None
* fix usage of index name in aggregation plot