Conversation

@FBumann FBumann commented Oct 24, 2025

Description

Centralize logic for yaml and json io

Type of Change

  • Bug fix
  • New feature
  • Documentation update
  • Code refactoring

Related Issues

Closes #(issue number)

Testing

  • I have tested my changes
  • Existing tests still pass

Checklist

  • My code follows the project style
  • I have updated documentation if needed
  • I have added tests for new functionality (if applicable)

Summary by CodeRabbit

  • Refactor

    • Centralized JSON/YAML I/O for consistent loading and saving across the app
    • Configuration loader now auto-detects file format for more flexible config loading
    • Improved YAML output with richer multi-line and formatting support for saved documents
    • Numeric outputs (nested floats) are now rounded consistently before display/storage for clearer, stable exports
  • Bug Fix / I/O

    • NetCDF read/write now uses a single, consistent engine and enforces proper extension/compression handling for more reliable dataset persistence

coderabbitai bot commented Oct 24, 2025

Walkthrough

Centralizes JSON/YAML I/O in a new flixopt/io.py, moves nested-float rounding into that module, removes flixopt/utils.py, updates config.py, results.py, structure.py, and calculation.py to use the new io helpers, and swaps NetCDF backend and dependency to netcdf4.

Changes

  • New I/O & rounding utilities (flixopt/io.py): Adds load_json, save_json, load_yaml, save_yaml, load_config_file, _save_yaml_multiline, and round_nested_floats; implements format autodetection, improved YAML formatting, numpy/xarray-aware rounding, and centralized NetCDF dataset read/write using netcdf4. (A hypothetical call-site sketch follows this list.)
  • Config loader (flixopt/config.py): Removes the top-level yaml import; CONFIG.load_from_file() now performs a lazy/local call to fx_io.load_yaml(config_path) to obtain and apply the config dictionary (no safe=True argument).
  • Results I/O updates (flixopt/results.py): Replaces direct json/yaml file reads/writes with fx_io.load_config_file(), fx_io.load_yaml()/fx_io.load_json(), and fx_io.save_yaml()/fx_io.save_json(); Linopy model saving now uses engine='netcdf4'.
  • Structure JSON output (flixopt/structure.py): Replaces manual open()/json.dump() with fx_io.save_json() in to_json() and removes the unused json import.
  • Calculation rounding usage (flixopt/calculation.py): Replaces calls to utils.round_nested_floats with fx_io.round_nested_floats for final-results rounding and logging.
  • Utilities removed (flixopt/utils.py): Deletes the module and its functions (round_nested_floats, convert_dataarray); the logic migrated into flixopt/io.py.
  • NetCDF dependency & changelog (pyproject.toml, CHANGELOG.md): Changes the dependency from h5netcdf to netcdf4 in pyproject.toml; adds a changelog note about reverting to the netcdf4 engine.
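To make the migration concrete, here is a minimal, hypothetical sketch of how a call site might switch to the centralized helpers; the function names follow the summary above, but parameter order and defaults are assumptions rather than the verified flixopt API.

```python
# Hypothetical call-site usage of the centralized helpers; names follow the PR
# summary, but signatures and defaults are assumptions.
from pathlib import Path

from flixopt import io as fx_io

results_dir = Path('results')
results_dir.mkdir(exist_ok=True)

summary = {'objective': 1234.5678, 'solver': 'highs'}

# Before: open() + json.dump() scattered across modules.
# After: one helper call with consistent defaults (UTF-8, readable indentation).
fx_io.save_json(summary, results_dir / 'summary.json')

# load_config_file() picks the parser from the extension (JSON vs. YAML),
# with a fallback attempt when the suffix is unknown.
config = fx_io.load_config_file(results_dir / 'summary.json')
```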

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller as Caller (CONFIG / RESULTS / STRUCT / CALC)
  participant fx_io as flixopt.io
  participant FS as Filesystem

  Caller->>fx_io: load_config_file(path) / load_yaml(path) / load_json(path)
  fx_io->>FS: open/read file
  FS-->>fx_io: raw text/bytes
  fx_io->>fx_io: parse (json or yaml) / validate
  fx_io-->>Caller: parsed dict/list

  Caller->>fx_io: save_yaml(data,path) / save_json(data,path)
  fx_io->>fx_io: round_nested_floats(...) (if applied)
  fx_io->>fx_io: format YAML (multiline handling)
  fx_io->>FS: write file
  FS-->>fx_io: write complete
  fx_io-->>Caller: success / raise error

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Review flixopt/io.py in detail: YAML representers, _save_yaml_multiline, round_nested_floats behavior for numpy/xarray, and netCDF engine usage.
  • Verify all call sites migrated from utils/direct json/yaml to fx_io and ensure no remaining references to the removed utils.py (a quick scan sketch follows this list).
  • Check CONFIG.load_from_file() lazy import/error handling and ensure config semantics unchanged.
  • Confirm pyproject.toml dependency change is consistent with packaging/testing.
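For the migration check in the list above, a small scan over the package sources is enough to surface leftover references to the removed module; this snippet is purely illustrative and not part of the PR.

```python
# Illustrative check for leftover references to the removed flixopt/utils.py.
import re
from pathlib import Path

pattern = re.compile(r'\butils\.(round_nested_floats|convert_dataarray)\b|from \. import utils')

for py_file in Path('flixopt').rglob('*.py'):
    for lineno, line in enumerate(py_file.read_text(encoding='utf-8').splitlines(), start=1):
        if pattern.search(line):
            print(f'{py_file}:{lineno}: {line.strip()}')
```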

Possibly related PRs

Suggested labels

v3.0.0

Poem

🐰
I hop through files and tidy the streams,
I fold YAML lines and polish the beams,
Floats get rounded, docs shine multiline,
JSON and YAML now share one fine vine,
A rabbit's small patch — neat, bright, and clean.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description Check (⚠️ Warning): The pull request description follows the template structure but is incomplete in several critical areas. While a brief description is provided ("Centralize logic for yaml and json io"), the Type of Change section has no checkboxes selected despite the PR introducing new I/O functions, the Related Issues field still contains the placeholder text "Closes #(issue number)" without a specific issue reference, and the Testing section shows both checkboxes unchecked, indicating the changes have not been verified. The description lacks essential information needed to understand the scope and validation of these changes.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 79.17%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title Check (✅ Passed): The title "Feature/centralized io" clearly and concisely describes the primary change in this pull request: centralizing JSON and YAML I/O logic across the codebase. The summary changes demonstrate that new centralized I/O functions were added to flixopt/io.py, and multiple modules (config.py, results.py, structure.py, calculation.py) were refactored to use these new utilities instead of direct I/O operations. The title accurately reflects this main objective and is specific enough for developers scanning commit history to understand the changeset's intent.

@FBumann FBumann commented Oct 24, 2025

@coderabbitai review

coderabbitai bot commented Oct 24, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ae56e6c and 9467198.

📒 Files selected for processing (4)
  • flixopt/config.py (1 hunks)
  • flixopt/io.py (1 hunks)
  • flixopt/results.py (5 hunks)
  • flixopt/structure.py (1 hunks)
🔇 Additional comments (14)
flixopt/structure.py (1)

790-790: LGTM! Clean delegation to centralized I/O.

The refactoring correctly replaces direct JSON writing with the centralized utility while preserving error handling semantics.

flixopt/config.py (1)

301-309: LGTM! Lazy import pattern correctly avoids circular dependencies.

The refactoring uses a lazy local import of fx_io and delegates YAML loading to the centralized utility with safe=True, maintaining security.
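As a rough sketch of the lazy-import pattern being described (the real CONFIG.load_from_file() differs in its details; the _apply helper below is invented for illustration):

```python
# Sketch of a lazy local import to break a potential circular dependency;
# the _apply helper and the exact signature are assumptions, not flixopt code.
import pathlib


class CONFIG:
    @classmethod
    def load_from_file(cls, config_path: str | pathlib.Path) -> None:
        # Importing here instead of at module level keeps flixopt.io out of the
        # import graph until the method is actually called.
        from flixopt import io as fx_io

        config_dict = fx_io.load_yaml(config_path)
        cls._apply(config_dict)

    @classmethod
    def _apply(cls, config_dict: dict) -> None:
        for key, value in config_dict.items():
            setattr(cls, key, value)
```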

flixopt/results.py (3)

47-47: LGTM! Uses the config file autodetector.

Clean delegation to the centralized config loader with automatic JSON/YAML detection.


1059-1059: LGTM! Clean delegation to centralized save functions.

Both YAML and JSON save operations correctly use the centralized utilities with appropriate defaults.

Also applies to: 2294-2294


2050-2050: LGTM! Uses centralized JSON loading.

Clean refactoring to the centralized load_json function.

flixopt/io.py (9)

42-58: LGTM! Simple and robust JSON loader.

Clean implementation with proper error handling and encoding.


61-80: LGTM! Consistent JSON formatting.

Good default parameters for readable JSON output (indent=4, ensure_ascii=False).
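For orientation, a save_json along the lines described here might look as follows; the indent=4 and ensure_ascii=False defaults come from this comment, the rest is a sketch rather than the actual implementation.

```python
# Minimal sketch of a JSON saver with the defaults mentioned above.
import json
import pathlib


def save_json(data: dict | list, path: str | pathlib.Path, indent: int = 4, **kwargs) -> None:
    """Write data as UTF-8 JSON; ensure_ascii=False keeps Unicode readable instead of escaping it."""
    path = pathlib.Path(path)
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(data, f, indent=indent, ensure_ascii=False, **kwargs)
```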


107-139: LGTM! Consistent YAML formatting with sensible defaults.

Good defaults: readable indentation, wide line width, Unicode support, and preserves dict order.


142-199: LGTM! Robust config file autodetector with fallback logic.

The extension-based detection with fallback is well-designed and handles common scenarios. Good logging on fallback attempts.
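A compressed sketch of the detect-then-fall-back strategy (the actual load_config_file handles more cases and logs each fallback attempt):

```python
# Sketch of extension-based format detection with a parser fallback.
import json
import pathlib

import yaml


def load_config_file(path: str | pathlib.Path) -> dict | list:
    """Pick the parser from the extension and fall back to the other format if parsing fails (sketch)."""
    path = pathlib.Path(path)
    suffix = path.suffix.lower()
    text = path.read_text(encoding='utf-8')
    if suffix == '.json':
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            # YAML is a superset of JSON in most practical cases, so try it as a fallback.
            return yaml.safe_load(text) or {}
    if suffix in ('.yaml', '.yml'):
        return yaml.safe_load(text) or {}
    # Unknown extension: try YAML first (more common for configs), then JSON.
    try:
        return yaml.safe_load(text) or {}
    except yaml.YAMLError as err:
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            raise ValueError(f'Failed to parse {path} as YAML or JSON') from err
```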


201-305: LGTM! Sophisticated YAML formatting for complex content.

The custom string representation and normalization logic handles edge cases like multiline strings, special characters, and various newline formats appropriately.
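The central idea, emitting multi-line strings as YAML literal blocks via a custom representer, can be sketched as follows; the real _save_yaml_multiline also normalizes different newline formats and handles special characters.

```python
# Sketch: represent multi-line strings in literal block style ('|') for readable YAML.
import yaml


def _represent_str(dumper: yaml.Dumper, value: str) -> yaml.ScalarNode:
    if '\n' in value:
        return dumper.represent_scalar('tag:yaml.org,2002:str', value, style='|')
    return dumper.represent_scalar('tag:yaml.org,2002:str', value)


class _MultilineDumper(yaml.SafeDumper):
    """SafeDumper subclass so the representer does not leak into global PyYAML state."""


_MultilineDumper.add_representer(str, _represent_str)

text = {'description': 'First line\nSecond line', 'name': 'example'}
print(yaml.dump(text, Dumper=_MultilineDumper, sort_keys=False, allow_unicode=True))
```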


308-359: LGTM! Comprehensive model documentation.

Good implementation for extracting linopy model information. The infeasibility detection with proper exception handling is helpful for debugging.
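As an illustration of the infeasibility handling mentioned here, a defensive wrapper might look like the sketch below; whether compute_infeasibilities() is available depends on the linopy version and the solver (it is solver-specific), so treat the call as an assumption.

```python
# Hedged sketch: report infeasible constraints only if the solver/linopy version supports it.
def describe_infeasibilities(model) -> str:
    try:
        # compute_infeasibilities() is solver-specific (e.g. Gurobi); wrap it so
        # documentation generation never fails when it is unavailable.
        labels = model.compute_infeasibilities()
        return f'Infeasible constraint labels: {list(labels)}'
    except Exception as err:  # any failure simply means the information is unavailable
        return f'Infeasibility information not available ({err!r})'
```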


362-412: LGTM! NetCDF save with JSON-encoded attrs.

Clever approach to serialize complex attrs as JSON strings for round-trip compatibility. Good compression handling with dependency checking.
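The attrs round-trip idea can be shown with a small self-contained example (requires the netCDF4 package); the helper names are hypothetical, and the real implementation also covers data-variable and coordinate attrs plus compression encoding.

```python
# Sketch of JSON-encoding dataset attrs so nested metadata survives a NetCDF round trip.
import json

import numpy as np
import xarray as xr


def save_with_json_attrs(ds: xr.Dataset, path: str) -> None:
    # NetCDF attributes only support flat scalar/str types, so nested dicts are
    # stored as JSON strings and decoded again on load.
    ds = ds.copy()
    ds.attrs = {k: json.dumps(v) for k, v in ds.attrs.items()}
    ds.to_netcdf(path, engine='netcdf4')


def load_with_json_attrs(path: str) -> xr.Dataset:
    ds = xr.load_dataset(path, engine='netcdf4')
    ds.attrs = {k: json.loads(v) for k, v in ds.attrs.items()}
    return ds


ds = xr.Dataset({'flow': ('time', np.arange(3.0))}, attrs={'meta': {'solver': 'highs', 'runtime_s': 1.2}})
save_with_json_attrs(ds, 'example.nc')
print(load_with_json_attrs('example.nc').attrs['meta'])
```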


415-441: LGTM! Consistent NetCDF loading with attr restoration.

Properly restores attrs from JSON strings for datasets, data variables, and coordinates.


444-496: LGTM! Well-structured path management.

The dataclass with __post_init__ and update() provides a clean interface for managing related output paths.
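As an outside guess at the shape of such a dataclass (the field names here are invented for illustration, not taken from flixopt/io.py):

```python
# Hypothetical path-management dataclass; field names are illustrative only.
import pathlib
from dataclasses import dataclass, field


@dataclass
class ResultPaths:
    folder: pathlib.Path
    name: str
    model: pathlib.Path = field(init=False)
    summary: pathlib.Path = field(init=False)

    def __post_init__(self) -> None:
        self.folder = pathlib.Path(self.folder)
        self.update()

    def update(self) -> None:
        # Derived paths are recomputed whenever folder or name change.
        self.model = self.folder / f'{self.name}.nc'
        self.summary = self.folder / f'{self.name}_summary.yaml'


paths = ResultPaths(folder='results', name='run_01')
print(paths.model)  # results/run_01.nc
```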

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
flixopt/io.py (2)

109-141: Prefer explicit safe dump and stable emitter options.

Be explicit about the dumper to avoid regressions across PyYAML versions and keep output safe and predictable.

Apply:

-    with open(path, 'w', encoding='utf-8') as f:
-        yaml.dump(
+    with open(path, 'w', encoding='utf-8') as f:
+        yaml.safe_dump(
             data,
             f,
             indent=indent,
             width=width,
             allow_unicode=allow_unicode,
             sort_keys=sort_keys,
             default_flow_style=False,
             **kwargs,
         )

144-201: Config autodetect: small hardening and semantics clean‑up.

Two tweaks:

  • Normalize extension with .suffix.lower() for robustness.
  • Consider harmonizing empty-file handling: currently an empty .json will fall back and return {} via YAML, which is surprising.

Apply:

-    # Try based on file extension
-    if path.suffix == '.json':
+    # Try based on file extension (case-insensitive)
+    suffix = path.suffix.lower()
+    if suffix == '.json':
@@
-    elif path.suffix in ['.yaml', '.yml']:
+    elif suffix in ('.yaml', '.yml'):
@@
-    else:
+    else:
         # Unknown extension, try YAML first (more common for config)
         try:
             return load_yaml(path, safe=True)
         except yaml.YAMLError:
             try:
                 return load_json(path)
             except json.JSONDecodeError as e:
                 raise ValueError(f'Failed to parse {path} as YAML or JSON') from e

Optionally, reject empty .json instead of silently accepting via YAML fallback:

-        except json.JSONDecodeError:
+        except json.JSONDecodeError as err:
+            # Treat empty JSON files explicitly to avoid silent YAML fallback
+            if path.read_text(encoding='utf-8').strip() == '':
+                raise ValueError(f'Empty JSON file: {path}') from err
             logger.warning(f'Failed to parse {path} as JSON, trying YAML')
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9467198 and 56d4139.

📒 Files selected for processing (2)
  • flixopt/io.py (1 hunks)
  • flixopt/results.py (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
flixopt/results.py (1)
flixopt/io.py (5)
  • load_config_file (144-200)
  • load_yaml (83-106)
  • save_yaml (109-141)
  • load_json (42-58)
  • save_json (61-80)
🔇 Additional comments (7)
flixopt/io.py (2)

42-59: JSON loader: looks good.

Simple, typed, UTF‑8 by default. No issues.


61-81: JSON saver: looks good.

Deterministic formatting, UTF‑8, extensible via kwargs. No issues.

flixopt/results.py (5)

47-47: Good consolidation to centralized config loader.

Keeps suffix logic in one place. Clean.


172-172: Switched to safe YAML loading.

Now uses safe=True through fx_io; resolves prior security concern.


1059-1059: Saving summary via fx_io.save_yaml: OK.

Consistent with centralized I/O and read path.


2050-2050: Segment meta JSON loading via fx_io: OK.

Delegation simplifies maintenance.


2294-2294: Segment meta JSON saving via fx_io: OK.

Symmetric with load; deterministic formatting.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
flixopt/io.py (1)

406-456: Fix engine detection and only pass engine when installed; correct module names.

find_spec('netcdf4') fails (module is 'netCDF4'); also engine is passed even when missing.

 def save_dataset_to_netcdf(
@@
-    apply_encoding = False
-    if compression != 0:
-        if importlib.util.find_spec(engine) is not None:
-            apply_encoding = True
-        else:
-            logger.warning(
-                f'Dataset was exported without compression due to missing dependency "{engine}".'
-                f'Install {engine} via `pip install {engine}`.'
-            )
+    # Map xarray engine -> importable module name
+    _ENGINE_MODULE = {'netcdf4': 'netCDF4', 'h5netcdf': 'h5netcdf', 'scipy': 'scipy'}
+    mod_name = _ENGINE_MODULE.get(engine, engine)
+    has_engine = importlib.util.find_spec(mod_name) is not None
+
+    apply_encoding = compression != 0 and has_engine
+    if compression != 0 and not has_engine:
+        logger.warning(
+            f'Dataset was exported without compression due to missing dependency "{engine}". '
+            f'Install {mod_name} via `pip install {mod_name}`.'
+        )
@@
-    ds.to_netcdf(
-        path,
-        encoding=None
-        if not apply_encoding
-        else {data_var: {'zlib': True, 'complevel': compression} for data_var in ds.data_vars},
-        engine=engine,
-    )
+    _kwargs = {
+        'encoding': None
+        if not apply_encoding
+        else {data_var: {'zlib': True, 'complevel': compression} for data_var in ds.data_vars},
+    }
+    if has_engine:
+        _kwargs['engine'] = engine
+    ds.to_netcdf(path, **_kwargs)
flixopt/calculation.py (1)

25-25: Broken import: utils module does not exist; remove unused import statement.

The import at line 25 references a nonexistent module (utils.py is absent from the repository) and is unused within the file. This will raise an ImportError when the module is loaded. Delete the import statement.

-from . import utils as utils
♻️ Duplicate comments (1)
flixopt/io.py (1)

125-149: Remove unsafe YAML path from public API (FullLoader).

Keep public loader always safe; provide a private escape hatch if truly required. This matches prior review.

-def load_yaml(path: str | pathlib.Path, safe: bool = True) -> dict | list:
+def load_yaml(path: str | pathlib.Path) -> dict | list:
@@
-    Args:
-        path: Path to the YAML file.
-        safe: If True, use safe_load for security (default: True).
-            If False, use FullLoader (allows arbitrary Python objects - SECURITY RISK).
-            Only use safe=False for trusted, internally-generated files.
+    Args:
+        path: Path to the YAML file.
@@
-    with open(path, encoding='utf-8') as f:
-        if safe:
-            return yaml.safe_load(f) or {}
-        else:
-            return yaml.load(f, Loader=yaml.FullLoader) or {}
+    with open(path, encoding='utf-8') as f:
+        return yaml.safe_load(f) or {}
+
+# Internal-only escape hatch (not exported)
+def _load_yaml_unsafe(path: str | pathlib.Path) -> dict | list:
+    """
+    INTERNAL: Load YAML allowing arbitrary tags. Do not use on untrusted input.
+    """
+    path = pathlib.Path(path)
+    with open(path, encoding='utf-8') as f:
+        return yaml.unsafe_load(f) or {}
🧹 Nitpick comments (5)
flixopt/io.py (4)

151-183: Use yaml.safe_dump for consistency and safer defaults.

Data is already converted to primitives; safe_dump is appropriate.

-    with open(path, 'w', encoding='utf-8') as f:
-        yaml.dump(
+    with open(path, 'w', encoding='utf-8') as f:
+        yaml.safe_dump(
             data,
             f,
             indent=indent,
             width=width,
             allow_unicode=allow_unicode,
             sort_keys=sort_keys,
             default_flow_style=False,
             **kwargs,
         )

245-293: Avoid global representer mutation; bind to local CustomDumper.

Register the string representer on the custom dumper, not globally on SafeDumper.

-    # Add the string representer to SafeDumper
-    yaml.add_representer(str, represent_str, Dumper=yaml.SafeDumper)
-
-    # Configure dumper options for better formatting
+    # Configure dumper options for better formatting
     class CustomDumper(yaml.SafeDumper):
         def increase_indent(self, flow=False, indentless=False):
             return super().increase_indent(flow, False)
 
+    # Bind representer locally to avoid global side effects
+    CustomDumper.add_representer(str, represent_str)
@@
-        yaml.dump(
+        yaml.dump(
             processed_data,
             file,
             Dumper=CustomDumper,
             sort_keys=False,  # Preserve dictionary order
             default_flow_style=False,  # Use block style for mappings
             width=1000,  # Set a reasonable line width
             allow_unicode=True,  # Support Unicode characters
             indent=2,  # Set consistent indentation
         )

459-485: Add engine fallbacks when loading NetCDF for portability.

Try common engines in order; use the first available.

-def load_dataset_from_netcdf(path: str | pathlib.Path) -> xr.Dataset:
+def load_dataset_from_netcdf(path: str | pathlib.Path) -> xr.Dataset:
@@
-    ds = xr.load_dataset(str(path), engine='h5netcdf')
+    last_err = None
+    for eng in ('h5netcdf', 'netcdf4', 'scipy'):
+        try:
+            return xr.load_dataset(str(path), engine=eng)
+        except Exception as e:
+            last_err = e
+            continue
+    # If all engines fail, raise the last error
+    raise last_err
@@
-    return ds
+    # Unreachable now; kept to show post-processing remains unchanged

186-243: Normalize extension handling to be case-insensitive.

Lowercase the suffix once to avoid surprises with .JSON/.YML, then reuse.

-    # Try based on file extension
-    if path.suffix == '.json':
+    # Try based on file extension
+    suffix = path.suffix.lower()
+    if suffix == '.json':
@@
-    elif path.suffix in ['.yaml', '.yml']:
+    elif suffix in ['.yaml', '.yml']:
flixopt/calculation.py (1)

255-261: Avoid double rounding; main_results is already rounded.

self.main_results has been processed by fx_io.round_nested_floats; re-wrapping is redundant.

-                + yaml.dump(
-                    fx_io.round_nested_floats(self.main_results),
+                + yaml.dump(
+                    self.main_results,
                     default_flow_style=False,
                     sort_keys=False,
                     allow_unicode=True,
                     indent=4,
                 )
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 56d4139 and f150763.

📒 Files selected for processing (3)
  • flixopt/calculation.py (2 hunks)
  • flixopt/io.py (3 hunks)
  • flixopt/utils.py (0 hunks)
💤 Files with no reviewable changes (1)
  • flixopt/utils.py
🧰 Additional context used
🧬 Code graph analysis (1)
flixopt/calculation.py (1)
flixopt/io.py (1)
  • round_nested_floats (38-76)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
flixopt/calculation.py (1)

254-260: Remove redundant rounding call.

The main_results property (line 146) already applies round_nested_floats, so calling it again here is unnecessary and wastes computation.

Apply this diff:

         if log_main_results:
             logger.info(
                 f'{" Main Results ":#^80}\n'
                 + yaml.dump(
-                    fx_io.round_nested_floats(self.main_results),
+                    self.main_results,
                     default_flow_style=False,
                     sort_keys=False,
                     allow_unicode=True,
                     indent=4,
                 )
             )
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f150763 and 382ff84.

📒 Files selected for processing (1)
  • flixopt/calculation.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
flixopt/calculation.py (1)
flixopt/io.py (1)
  • round_nested_floats (38-76)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 382ff84 and c5a6b1b.

📒 Files selected for processing (6)
  • CHANGELOG.md (1 hunks)
  • flixopt/calculation.py (2 hunks)
  • flixopt/config.py (1 hunks)
  • flixopt/io.py (7 hunks)
  • flixopt/results.py (5 hunks)
  • pyproject.toml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • flixopt/config.py
  • flixopt/results.py
  • flixopt/calculation.py
🔇 Additional comments (9)
flixopt/io.py (8)

1-18: LGTM!

The import structure is clean and well-organized. Good use of TYPE_CHECKING guard for linopy to avoid circular dependencies.


37-81: LGTM!

Excellent implementation of nested float rounding with comprehensive NumPy type handling. The check order is correct (NumPy scalars before native types), and the conversions safely produce native Python types for JSON/YAML serialization.
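To make the check-order point concrete, here is a simplified sketch: NumPy scalars are handled before the plain float branch because np.float64 is itself a float subclass, but it should still be converted to a native Python type so YAML/JSON serializers can handle it. The real function additionally handles xarray objects and more dtypes.

```python
# Simplified sketch of nested float rounding; the flixopt version also
# understands xarray DataArrays/Datasets and more NumPy dtypes.
import numpy as np


def round_nested_floats(obj, decimals: int = 2):
    # NumPy scalars first: np.float64 is a float subclass, but we want a plain
    # Python float in the output so YAML/JSON serializers can handle it.
    if isinstance(obj, np.floating):
        return round(float(obj), decimals)
    if isinstance(obj, np.integer):
        return int(obj)
    if isinstance(obj, float):
        return round(obj, decimals)
    if isinstance(obj, dict):
        return {key: round_nested_floats(value, decimals) for key, value in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [round_nested_floats(item, decimals) for item in obj]
    return obj


print(round_nested_floats({'objective': np.float64(3.14159), 'flows': [1.23456, 2]}))
# {'objective': 3.14, 'flows': [1.23, 2]}
```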


84-127: LGTM!

Clean JSON I/O wrappers with sensible defaults. Good choice of ensure_ascii=False to support Unicode characters by default.


130-200: LGTM!

The YAML I/O functions are well-designed with security in mind. The unsafe loader is properly isolated as an internal function with clear warnings, while the public API always uses safe loading.


203-262: LGTM!

Robust configuration file loader with intelligent format detection and fallback logic. The case-insensitive extension matching and exception chaining are good practices.


298-312: LGTM!

Excellent fix binding the representer locally to CustomDumper instead of globally. This prevents side effects on other YAML operations. The consistent indentation of 4 spaces also improves readability.


372-423: LGTM!

Good integration with the improved _save_yaml_multiline function. The infeasibility detection with proper error handling is a nice debugging feature.


426-494: LGTM!

The NetCDF I/O functions now consistently use the netcdf4 engine, aligning with the dependency change in pyproject.toml. The attribute serialization via JSON ensures proper preservation of metadata.

pyproject.toml (1)

40-40: Verified that version 1.6.1 specifically exists; the initial registry output only showed the latest versions (1.7.x range), so it was checked explicitly.

No issues found with netcdf4 version constraint.

Version 1.6.1 was released on 09/18/2022 and is a stable, documented release. The security advisory check returned no vulnerabilities for netcdf4. The version constraint >= 1.6.1, < 2 is valid and permits compatibility with current stable releases (1.6.x and 1.7.x series).

CHANGELOG.md Outdated
- **Color terminology**: Standardized terminology from "colormap" to "colorscale" throughout the codebase for consistency with Plotly conventions
- **Default colorscales**: Changed default sequential colorscale from 'viridis' to 'turbo' for better perceptual uniformity; qualitative colorscale now defaults to 'plotly'
- **Aggregation plotting**: `Aggregation.plot()` now respects `CONFIG.Plotting.default_qualitative_colorscale` and uses `process_colors()` for consistent color handling
- **netcdf engine**: Following the xarray revert in `xarray==2025.09.02` and after running some benchmarks, we go back to using the netcdf4 engine

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify the xarray version reference.

The version xarray==2025.09.02 appears unusual for xarray's CalVer format, which typically follows patterns like 2024.2.0 or 2024.10.0. Please verify this is the correct version reference.


🏁 Script executed:

#!/bin/bash
# Check xarray versions to verify if 2025.09.02 exists

curl -s https://pypi.org/pypi/xarray/json | jq -r '.releases | keys[]' | grep "2025" | sort -V

Length of output: 136


Correct the xarray version reference: xarray==2025.09.02 does not exist.

The version 2025.09.02 is not a valid xarray release. Available versions follow the CalVer format YYYY.MINOR.PATCH (e.g., 2025.9.0, 2025.9.1). Update to either 2025.9.0 or 2025.9.1.

🤖 Prompt for AI Agents
In CHANGELOG.md around line 83, the xarray version reference
`xarray==2025.09.02` is incorrect; replace it with a valid CalVer release such
as `xarray==2025.9.0` or `xarray==2025.9.1` (recommend using `2025.9.1`), keep
the backticks and sentence structure intact, and update the line to read the
same message but with the corrected version string.

@FBumann FBumann merged commit 2b72783 into main Oct 26, 2025
7 checks passed