Feature/402 feature silent framework #407
Conversation
…bar, add method to suppress_output() and add tqdm to dependencies
Walkthrough
Add CONFIG.Solving and convenience runtime modes; introduce an FD-level suppress_output() context manager; update solvers to use CONFIG.Solving defaults; SegmentedCalculation can suppress per-segment solver output and show a tqdm progress bar; examples, tests, CI, and dependencies updated.
Sequence Diagram(s)

sequenceDiagram
autonumber
participant User
participant SegCalc as SegmentedCalculation
participant Progress as tqdm
participant IO as suppress_output()
participant Solver
User->>SegCalc: do_modeling_and_solve(solver, show_individual_solves=False)
SegCalc->>Progress: create progress bar (n segments)
loop per segment
SegCalc->>Progress: set description (segment window)
SegCalc->>IO: enter suppress_output()
note right of IO #f0f4c3: FD-level stdout/stderr redirected
SegCalc->>Solver: solver.solve()
Solver-->>SegCalc: result (no console output)
IO->>IO: restore stdout/stderr
SegCalc->>Progress: advance
end
SegCalc->>Progress: close()
SegCalc-->>User: return aggregated results and durations
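In code, the per-segment flow in the diagram corresponds roughly to a loop of the following shape. This is an illustrative sketch only: the segment objects, the solve callable, and the suppress_output import are simplified stand-ins, not the actual flixopt implementation.

```python
import sys
from tqdm import tqdm

def solve_segments(segments, solve, suppress_output):
    """Illustrative sketch of the sequence diagram above; `segments`, `solve`,
    and `suppress_output` are hypothetical stand-ins, not the real flixopt names."""
    progress_bar = tqdm(segments, total=len(segments), desc='Solving segments',
                        unit='segment', file=sys.stdout)
    try:
        for segment in progress_bar:
            progress_bar.set_description(f'Solving ({segment})')
            with suppress_output():      # FD-level: silences C-level solver prints too
                solve(segment)           # solver runs with no console output
    finally:
        progress_bar.close()             # restore a clean terminal even on errors
```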
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
🔇 Additional comments (2)
@coderabbitai review

✅ Actions performed
Review triggered.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
flixopt/calculation.py (1)
583-624: Ensure progress bar cleanup on exceptions.
The progress bar integration looks good and effectively provides user feedback while suppressing per-segment solver output. However, if an exception occurs during segment solving (lines 591-623), the progress bar won't be closed (line 624), potentially leaving the terminal in a corrupted state.
Wrap the progress bar usage in a try-finally block to ensure cleanup:
-        # Create tqdm progress bar with custom format that prints to stdout
-        progress_bar = tqdm(
-            enumerate(self.sub_calculations),
-            total=len(self.sub_calculations),
-            desc='Solving segments',
-            unit='segment',
-            file=sys.stdout,  # Force tqdm to write to stdout instead of stderr
-        )
-
-        for i, calculation in progress_bar:
-            # Update progress bar description with current segment info
-            progress_bar.set_description(
-                f'Solving ({calculation.flow_system.timesteps[0]} -> {calculation.flow_system.timesteps[-1]})'
-            )
-
-            if i > 0 and self.nr_of_previous_values > 0:
-                self._transfer_start_values(i)
-
-            calculation.do_modeling()
-
-            # Warn about Investments, but only in fist run
-            if i == 0:
-                invest_elements = [
-                    model.label_full
-                    for component in calculation.flow_system.components.values()
-                    for model in component.submodel.all_submodels
-                    if isinstance(model, InvestmentModel)
-                ]
-                if invest_elements:
-                    logger.critical(
-                        f'Investments are not supported in Segmented Calculation! '
-                        f'Following InvestmentModels were found: {invest_elements}'
-                    )
-
-            # Redirect solver stdout to null to avoid cluttering the output
-            with utils.suppress_output():
-                calculation.solve(
-                    solver,
-                    log_file=pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log',
-                    log_main_results=log_main_results,
-                )
-
-        progress_bar.close()
+        # Create tqdm progress bar with custom format that prints to stdout
+        progress_bar = tqdm(
+            enumerate(self.sub_calculations),
+            total=len(self.sub_calculations),
+            desc='Solving segments',
+            unit='segment',
+            file=sys.stdout,  # Force tqdm to write to stdout instead of stderr
+        )
+
+        try:
+            for i, calculation in progress_bar:
+                # Update progress bar description with current segment info
+                progress_bar.set_description(
+                    f'Solving ({calculation.flow_system.timesteps[0]} -> {calculation.flow_system.timesteps[-1]})'
+                )
+
+                if i > 0 and self.nr_of_previous_values > 0:
+                    self._transfer_start_values(i)
+
+                calculation.do_modeling()
+
+                # Warn about Investments, but only in fist run
+                if i == 0:
+                    invest_elements = [
+                        model.label_full
+                        for component in calculation.flow_system.components.values()
+                        for model in component.submodel.all_submodels
+                        if isinstance(model, InvestmentModel)
+                    ]
+                    if invest_elements:
+                        logger.critical(
+                            f'Investments are not supported in Segmented Calculation! '
+                            f'Following InvestmentModels were found: {invest_elements}'
+                        )
+
+                # Redirect solver stdout to null to avoid cluttering the output
+                with utils.suppress_output():
+                    calculation.solve(
+                        solver,
+                        log_file=pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log',
+                        log_main_results=log_main_results,
+                    )
+        finally:
+            progress_bar.close()
🧹 Nitpick comments (1)
flixopt/utils.py (1)
92-112: Document thread-safety limitation and consider error handling.
The implementation correctly handles FD redirection and cleanup. However, there are two considerations:
- Thread Safety: This context manager modifies global file descriptors (1 and 2), making it unsafe for concurrent use. If multiple threads call suppress_output() simultaneously, FD state will be corrupted.
- Error Handling: The os.dup() and os.dup2() calls can raise OSError if file descriptors are exhausted or invalid. While rare, this could leave FDs in an inconsistent state.

Consider adding a docstring note about thread-safety:
 @contextmanager
 def suppress_output():
-    """Redirect both Python and C-level stdout/stderr to os.devnull."""
+    """Redirect both Python and C-level stdout/stderr to os.devnull.
+
+    Warning:
+        This context manager is not thread-safe as it modifies global file descriptors.
+        Only use in single-threaded contexts or with external synchronization.
+    """

Optionally, add defensive error handling:
@contextmanager
def suppress_output():
    """Redirect both Python and C-level stdout/stderr to os.devnull.

    Warning:
        This context manager is not thread-safe as it modifies global file descriptors.
        Only use in single-threaded contexts or with external synchronization.
    """
    with open(os.devnull, 'w') as devnull:
        old_stdout_fd = None
        old_stderr_fd = None
        try:
            # Save original file descriptors
            old_stdout_fd = os.dup(1)
            old_stderr_fd = os.dup(2)

            # Flush any pending text
            sys.stdout.flush()
            sys.stderr.flush()

            # Redirect low-level fds to devnull
            os.dup2(devnull.fileno(), 1)
            os.dup2(devnull.fileno(), 2)

            yield
        finally:
            # Restore fds
            if old_stdout_fd is not None:
                os.dup2(old_stdout_fd, 1)
                os.close(old_stdout_fd)
            if old_stderr_fd is not None:
                os.dup2(old_stderr_fd, 2)
                os.close(old_stderr_fd)
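A quick usage sketch to go with the suggestion above. It assumes the helper is importable from flixopt.utils, as in this revision of the PR (later commits move it to flixopt.io); the surrounding prints are illustrative only.

```python
import os

from flixopt.utils import suppress_output  # moved to flixopt.io later in this PR

with suppress_output():
    print('hidden: Python-level write')      # goes through FD 1, which now points to devnull
    os.write(1, b'hidden: C-level write\n')  # bypasses sys.stdout, still silenced

print('visible again')  # descriptors are restored on exit
```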
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- flixopt/calculation.py (4 hunks)
- flixopt/utils.py (2 hunks)
- pyproject.toml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
🔇 Additional comments (3)
pyproject.toml (1)
44-44: LGTM!
The tqdm dependency is appropriately added with a reasonable version constraint that aligns with the project's dependency management patterns.
flixopt/calculation.py (2)
241-242: LGTM!
The added logging provides useful visibility into the solver execution and status, which aligns well with the PR's goal of improving usability while providing appropriate feedback.

630-630: LGTM!
The logging statement provides useful feedback about the total solving time, consistent with the logging improvements in FullCalculation.solve().
…ilent-framework # Conflicts: # flixopt/utils.py
… has some issues...)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- .github/workflows/python-app.yaml (1 hunks)
- CHANGELOG.md (1 hunks)
- examples/00_Minmal/minimal_example.py (1 hunks)
- examples/01_Simple/simple_example.py (2 hunks)
- examples/02_Complex/complex_example.py (1 hunks)
- examples/02_Complex/complex_example_results.py (1 hunks)
- examples/03_Calculation_types/example_calculation_types.py (1 hunks)
- examples/04_Scenarios/scenario_example.py (2 hunks)
- examples/05_Two-stage-optimization/two_stage_optimization.py (2 hunks)
- flixopt/calculation.py (6 hunks)
- flixopt/config.py (8 hunks)
- flixopt/io.py (2 hunks)
- flixopt/solvers.py (6 hunks)
- pyproject.toml (1 hunks)
- tests/conftest.py (0 hunks)
- tests/test_config.py (10 hunks)
💤 Files with no reviewable changes (1)
- tests/conftest.py
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
🧰 Additional context used
🧬 Code graph analysis (10)
examples/05_Two-stage-optimization/two_stage_optimization.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

examples/02_Complex/complex_example_results.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

examples/01_Simple/simple_example.py (3)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)
- flixopt/plotting.py (1): plot_network (827-908)
- flixopt/flow_system.py (1): plot_network (549-588)

flixopt/solvers.py (1)
- flixopt/config.py (2): CONFIG (78-492), Solving (210-232)

examples/03_Calculation_types/example_calculation_types.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

tests/test_config.py (2)
- flixopt/config.py (8): CONFIG (78-492), Solving (210-232), Logging (110-195), Modeling (197-208), apply (296-332), reset (273-293), load_from_file (335-354), to_dict (385-432)
- flixopt/solvers.py (2): HighsSolver (67-89), GurobiSolver (45-64)

examples/02_Complex/complex_example.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

examples/00_Minmal/minimal_example.py (1)
- flixopt/config.py (2): CONFIG (78-492), silent (435-447)

examples/04_Scenarios/scenario_example.py (4)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)
- flixopt/results.py (2): flow_system (339-355), plot_network (1023-1043)
- flixopt/plotting.py (1): plot_network (827-908)
- flixopt/flow_system.py (1): plot_network (549-588)

flixopt/calculation.py (2)
- flixopt/config.py (2): CONFIG (78-492), Solving (210-232)
- flixopt/io.py (1): suppress_output (940-992)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.10)
🔇 Additional comments (9)
flixopt/io.py (1)
939-992: Well-implemented output suppression for solver logs.
The context manager correctly handles OS-level file descriptor redirection to suppress C-level solver output. The nested exception handling ensures cleanup happens even if restoration fails, and the thread-safety warning is appropriately documented.
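For context on why the redirection has to happen at the file-descriptor level (illustrative snippet, not part of the PR): Python-level redirection alone does not reach output written directly to the descriptors, which is how many compiled solver libraries print.

```python
import contextlib
import os

with open(os.devnull, 'w') as devnull, contextlib.redirect_stdout(devnull):
    print('suppressed')                 # routed through sys.stdout -> devnull
    os.write(1, b'NOT suppressed\n')    # writes straight to FD 1, bypassing sys.stdout
```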
.github/workflows/python-app.yaml (1)
27-28: Appropriate headless rendering configuration for CI.
The environment variables correctly configure non-interactive backends for both matplotlib and plotly, preventing automatic plot windows during test execution.
examples/03_Calculation_types/example_calculation_types.py (1)
14-15: Clean migration to the new configuration API.
The use of fx.CONFIG.exploring() appropriately enables verbose output for this demonstration example, replacing the previous manual configuration steps.

examples/00_Minmal/minimal_example.py (1)

12-12: Appropriate use of silent mode for minimal example.
The fx.CONFIG.silent() call is well-suited for this minimal example, demonstrating the new silent framework capability while keeping the example output clean.

examples/05_Two-stage-optimization/two_stage_optimization.py (2)

19-20: Consistent configuration pattern for exploration example.
The use of fx.CONFIG.exploring() is appropriate for this two-stage optimization example where users benefit from detailed solver output and result logging.

138-138: Direct output for user-facing message.
The change from logger.info to print() makes this a direct, user-facing console message.

CHANGELOG.md (1)
54-68: Comprehensive changelog documentation.
The changelog entry clearly documents the new CONFIG.Solving section, its default values, and how solvers integrate with these defaults. This provides users with the information needed to adopt the new configuration API.

examples/02_Complex/complex_example_results.py (1)

8-9: Appropriate configuration for interactive results exploration.
The use of fx.CONFIG.exploring() is well-suited for this results visualization script, enabling interactive plotting and verbose output for result analysis.

examples/02_Complex/complex_example.py (1)

12-13: Consistent configuration pattern across examples.
The migration to fx.CONFIG.exploring() maintains consistency with other example files, providing appropriate verbosity for this complex demonstration.
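Taken together, the examples now open with one of the two convenience modes. The sketch below only uses the two CONFIG calls introduced by this PR; the comments summarize the behavior reported in this review, not a guaranteed API contract.

```python
import flixopt as fx

# Interactive/example scripts: console logging, solver output, browser plotting.
fx.CONFIG.exploring()

# Scripted/production runs: the opposite — suppress console logging and solver output.
# fx.CONFIG.silent()
```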
def browser_plotting(cls) -> type[CONFIG]:
    """Configure for interactive usage with plotly to open plots in browser.

    Sets plotly.io.renderers.default = 'browser'. Useful for running examples
    and viewing interactive plots. Does NOT modify CONFIG.Plotting settings.
    """
    cls.Plotting.default_show = True
    cls.apply()

    import plotly.io as pio

    pio.renderers.default = 'browser'

    return cls
Guard browser_plotting() against missing Plotly
CONFIG.browser_plotting() now imports plotly.io unconditionally, and the examples call CONFIG.exploring(), which forwards to this helper. On installations without the optional Plotly dependency, the ImportError stops the program before any logging or plotting defaults are applied. That’s a regression: these scripts used to run without Plotly, and Plotly is still documented as optional. Please catch the import failure and degrade gracefully (e.g., log a warning and return) so that users without Plotly keep working functionality.
- import plotly.io as pio
-
- pio.renderers.default = 'browser'
+ try:
+ import plotly.io as pio
+ except ModuleNotFoundError:
+ logger.warning(
+ 'Plotly is not installed. Skipping browser renderer setup. '
+ 'Install plotly or run CONFIG.browser_plotting() after installing it.'
+ )
+ return cls
+
+ pio.renderers.default = 'browser'🤖 Prompt for AI Agents
In flixopt/config.py around lines 479 to 492, the browser_plotting() helper
imports plotly.io unconditionally which raises ImportError on installations
without Plotly; wrap the import and renderer assignment in a try/except
ImportError block, on ImportError log a clear warning (use the module logger or
logging.getLogger(__name__).warning) and simply return cls so the function
degrades gracefully; keep the existing cls.Plotting.default_show = True and
cls.apply() behavior before the import so defaults are still applied even when
Plotly is absent.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
flixopt/calculation.py (1)
586-624: Respect silent configuration when instantiating the progress bar.
CONFIG.silent() flips both CONFIG.Logging.console and CONFIG.Solving.log_to_console to False, but this progress bar still prints to stdout unconditionally. That defeats the purpose of the silent mode introduced in this PR and regresses the “silent framework” goal from issue #402. Please disable or skip the tqdm bar whenever the solving/logging settings request silence.

Example patch:
 progress_bar = tqdm(
     enumerate(self.sub_calculations),
     total=len(self.sub_calculations),
     desc='Solving segments',
     unit='segment',
     file=sys.stdout,  # Force tqdm to write to stdout instead of stderr
+    disable=not CONFIG.Solving.log_to_console,
 )
🧹 Nitpick comments (2)
examples/05_Two-stage-optimization/two_stage_optimization.py (1)
138-138: Consider using the framework logger instead of print statements.
The switch from logger.info() to print() is a step backward from logging best practices. Using print() bypasses the framework's centralized logging configuration and reduces flexibility (no log levels, filtering, or formatting).

Consider using the framework logger:

- print('Sizes were correctly equalized')
+ logger.info('Sizes were correctly equalized')

You'll need to add at the top of the file:

import logging

logger = logging.getLogger('flixopt')

This would respect the CONFIG.Logging settings and provide better integration with the framework's logging system.

flixopt/io.py (1)
939-992: Implementation is cross-platform compatible; consider adding tests.
The suppress_output() context manager is verified to work cross-platform. File descriptor operations used (os.dup, os.dup2, os.open) have been supported on both Unix and Windows since Python 3.2+, and the codebase targets Python 3.11, well beyond this requirement. os.devnull is also cross-platform.

The function is actively used in calculation.py:619. However, no tests exist for suppress_output() in the test suite, despite tests/test_io.py being present. Consider adding tests to verify the suppression behavior on different platforms.
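One lightweight way to assert the suppression directly would be pytest's file-descriptor capture. This is only a sketch under the assumption that suppress_output is importable from flixopt.io, as reviewed here.

```python
import os

from flixopt.io import suppress_output

def test_suppress_output_hides_fd_writes(capfd):
    """Sketch: capfd captures at the FD level, so it sees exactly what a user would."""
    with suppress_output():
        os.write(1, b'hidden stdout\n')
        os.write(2, b'hidden stderr\n')
    os.write(1, b'visible again\n')

    out, err = capfd.readouterr()
    assert 'hidden' not in out and 'hidden' not in err
    assert 'visible again' in out
```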
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- .github/workflows/python-app.yaml (1 hunks)
- CHANGELOG.md (1 hunks)
- examples/00_Minmal/minimal_example.py (1 hunks)
- examples/01_Simple/simple_example.py (2 hunks)
- examples/02_Complex/complex_example.py (1 hunks)
- examples/02_Complex/complex_example_results.py (1 hunks)
- examples/03_Calculation_types/example_calculation_types.py (1 hunks)
- examples/04_Scenarios/scenario_example.py (2 hunks)
- examples/05_Two-stage-optimization/two_stage_optimization.py (2 hunks)
- flixopt/calculation.py (6 hunks)
- flixopt/config.py (8 hunks)
- flixopt/io.py (2 hunks)
- flixopt/solvers.py (6 hunks)
- pyproject.toml (1 hunks)
- tests/conftest.py (0 hunks)
- tests/test_config.py (10 hunks)
💤 Files with no reviewable changes (1)
- tests/conftest.py
🧰 Additional context used
🧬 Code graph analysis (10)
examples/04_Scenarios/scenario_example.py (3)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)
- flixopt/plotting.py (1): plot_network (827-908)
- flixopt/flow_system.py (1): plot_network (549-588)

examples/05_Two-stage-optimization/two_stage_optimization.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

examples/01_Simple/simple_example.py (4)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)
- flixopt/results.py (2): flow_system (339-355), plot_network (1023-1043)
- flixopt/plotting.py (1): plot_network (827-908)
- flixopt/flow_system.py (1): plot_network (549-588)

examples/03_Calculation_types/example_calculation_types.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

examples/02_Complex/complex_example.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

examples/02_Complex/complex_example_results.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

examples/00_Minmal/minimal_example.py (1)
- flixopt/config.py (2): CONFIG (78-492), silent (435-447)

tests/test_config.py (2)
- flixopt/config.py (8): CONFIG (78-492), Solving (210-232), Logging (110-195), Modeling (197-208), apply (296-332), reset (273-293), load_from_file (335-354), to_dict (385-432)
- flixopt/solvers.py (2): HighsSolver (67-89), GurobiSolver (45-64)

flixopt/solvers.py (1)
- flixopt/config.py (2): CONFIG (78-492), Solving (210-232)

flixopt/calculation.py (3)
- flixopt/solvers.py (1): _Solver (17-42)
- flixopt/config.py (2): CONFIG (78-492), Solving (210-232)
- flixopt/io.py (1): suppress_output (940-992)
🔇 Additional comments (8)
pyproject.toml (1)
44-44: LGTM! Good dependency addition.
The tqdm library is a mature, well-maintained tool for progress bars, and the version constraint appropriately blocks potential breaking changes in version 5.0 while allowing patch and minor updates.
.github/workflows/python-app.yaml (1)
27-28: LGTM! Proper headless plotting configuration for CI.
Setting non-interactive backends at the workflow level ensures consistent behavior across all CI jobs and prevents plot windows from attempting to open in the headless CI environment.
examples/00_Minmal/minimal_example.py (1)
12-12: LGTM! Cleaner configuration API.
The new CONFIG.silent() method provides a more intuitive and concise way to suppress output, replacing the previous two-line pattern. This is particularly appropriate for a minimal example demonstrating production-ready usage.

examples/03_Calculation_types/example_calculation_types.py (1)

14-14: LGTM! Appropriate configuration mode for example.
The CONFIG.exploring() mode is well-suited for this example script as it enables informative logging and interactive plotting, helping users understand the different calculation types being demonstrated.

examples/04_Scenarios/scenario_example.py (2)
11-11: LGTM! Consistent with other examples.
Adding CONFIG.exploring() enables appropriate logging and plotting behavior for this interactive example.

191-191: LGTM! Cleaner API usage.
Removing the explicit show=True parameter leverages the centralized configuration set by CONFIG.exploring(), which includes enabling browser plotting by default. This reduces redundancy and makes the configuration flow clearer.

examples/05_Two-stage-optimization/two_stage_optimization.py (1)

19-19: LGTM! Consistent configuration approach.
Using CONFIG.exploring() aligns with other examples and provides appropriate logging/plotting defaults.

CHANGELOG.md (1)
54-68: LGTM! Clear and comprehensive changelog entry.
The changelog properly documents the new CONFIG.Solving configuration section with:
- Clear listing of all new configuration parameters
- Default values for each parameter
- Notes on solver integration and override behavior
This directly addresses the PR objectives from issue #402 to provide centralized solver configuration.
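The override behavior the entry describes boils down to the explicit-argument-wins pattern used throughout this PR. The sketch below is illustrative; the attribute name mirrors the log_main_results default quoted elsewhere in this review, and the helper function itself is hypothetical.

```python
def resolve_log_main_results(log_main_results: bool | None, config_solving) -> bool:
    """Explicit argument wins; None falls back to the CONFIG.Solving default."""
    return log_main_results if log_main_results is not None else config_solving.log_main_results
```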
fx.CONFIG.exploring()
Guard CONFIG.exploring() against missing Plotly
Line 11 now calls fx.CONFIG.exploring(), which immediately runs CONFIG.browser_plotting() (see flixopt/config.py lines 476-491). That helper unconditionally imports plotly.io; if Plotly is not installed—which was previously fine for this quick-start example—the import raises and the script aborts before doing anything. This regression affects every entrypoint that now uses CONFIG.exploring(). Please make the helper tolerant of missing Plotly (e.g., catch the ImportError and fall back to default_show=False) before switching the examples to it.
Apply this change in flixopt/config.py:
@classmethod
def browser_plotting(cls) -> type[CONFIG]:
- cls.Plotting.default_show = True
- cls.apply()
-
- import plotly.io as pio
-
- pio.renderers.default = 'browser'
+ try:
+ import plotly.io as pio
+ except ImportError:
+ cls.Plotting.default_show = False
+ cls.apply()
+ return cls
+
+ cls.Plotting.default_show = True
+ cls.apply()
+ pio.renderers.default = 'browser'This keeps the new configuration API but avoids breaking environments without Plotly. Based on relevant snippet.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
fx.CONFIG.exploring()

@classmethod
def browser_plotting(cls) -> type[CONFIG]:
    try:
        import plotly.io as pio
    except ImportError:
        cls.Plotting.default_show = False
        cls.apply()
        return cls

    cls.Plotting.default_show = True
    cls.apply()
    pio.renderers.default = 'browser'
🤖 Prompt for AI Agents
In flixopt/config.py around the browser_plotting() helper (referenced by
examples/01_Simple/simple_example.py lines 11-12), the function unconditionally
imports plotly.io which raises ImportError when Plotly is not installed and
aborts the script; modify browser_plotting() to catch ImportError (or
ModuleNotFoundError), and when caught set default_show=False (and optionally
emit a debug/warn message) so CONFIG.exploring() becomes tolerant of missing
Plotly and won’t break quick-start examples.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
flixopt/calculation.py (1)
585-627: Ensure progress bar cleanup on exceptions.
The progress bar is created on line 586 and closed on line 627, but if an exception occurs during the loop (lines 595-626), the close() call may be skipped, potentially leaving the progress bar in an inconsistent state or leaking resources.

Wrap the progress bar usage in a try/finally block to ensure cleanup:
-        # Create tqdm progress bar with custom format that prints to stdout
-        progress_bar = tqdm(
-            enumerate(self.sub_calculations),
-            total=len(self.sub_calculations),
-            desc='Solving segments',
-            unit='segment',
-            file=sys.stdout,  # Force tqdm to write to stdout instead of stderr
-            disable=not CONFIG.Solving.log_to_console,  # Respect silent configuration
-        )
-
-        for i, calculation in progress_bar:
-            # Update progress bar description with current segment info
-            progress_bar.set_description(
-                f'Solving ({calculation.flow_system.timesteps[0]} -> {calculation.flow_system.timesteps[-1]})'
-            )
-
-            if i > 0 and self.nr_of_previous_values > 0:
-                self._transfer_start_values(i)
-
-            calculation.do_modeling()
-
-            # Warn about Investments, but only in fist run
-            if i == 0:
-                invest_elements = [
-                    model.label_full
-                    for component in calculation.flow_system.components.values()
-                    for model in component.submodel.all_submodels
-                    if isinstance(model, InvestmentModel)
-                ]
-                if invest_elements:
-                    logger.critical(
-                        f'Investments are not supported in Segmented Calculation! '
-                        f'Following InvestmentModels were found: {invest_elements}'
-                    )
-
-            with fx_io.suppress_output():
-                calculation.solve(
-                    solver,
-                    log_file=pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log',
-                    log_main_results=log_main_results,
-                )
-
-        progress_bar.close()
+        # Create tqdm progress bar with custom format that prints to stdout
+        progress_bar = tqdm(
+            enumerate(self.sub_calculations),
+            total=len(self.sub_calculations),
+            desc='Solving segments',
+            unit='segment',
+            file=sys.stdout,  # Force tqdm to write to stdout instead of stderr
+            disable=not CONFIG.Solving.log_to_console,  # Respect silent configuration
+        )
+
+        try:
+            for i, calculation in progress_bar:
+                # Update progress bar description with current segment info
+                progress_bar.set_description(
+                    f'Solving ({calculation.flow_system.timesteps[0]} -> {calculation.flow_system.timesteps[-1]})'
+                )
+
+                if i > 0 and self.nr_of_previous_values > 0:
+                    self._transfer_start_values(i)
+
+                calculation.do_modeling()
+
+                # Warn about Investments, but only in fist run
+                if i == 0:
+                    invest_elements = [
+                        model.label_full
+                        for component in calculation.flow_system.components.values()
+                        for model in component.submodel.all_submodels
+                        if isinstance(model, InvestmentModel)
+                    ]
+                    if invest_elements:
+                        logger.critical(
+                            f'Investments are not supported in Segmented Calculation! '
+                            f'Following InvestmentModels were found: {invest_elements}'
+                        )
+
+                with fx_io.suppress_output():
+                    calculation.solve(
+                        solver,
+                        log_file=pathlib.Path(log_file) if log_file is not None else self.folder / f'{self.name}.log',
+                        log_main_results=log_main_results,
+                    )
+        finally:
+            progress_bar.close()

Alternatively, use the progress bar as a context manager (if tqdm supports it):
with tqdm(...) as progress_bar:
    for i, calculation in progress_bar:
        ...
🧹 Nitpick comments (1)
tests/test_io.py (1)
170-188: Test validates C-level write handling, though verification is implicit.
The test confirms that C-level writes to file descriptors during suppression don't cause errors and that streams remain functional afterward. While the test doesn't explicitly assert that output was suppressed, any unsuppressed output would be visible in test logs, making this an acceptable smoke test.
If you want stronger verification, you could capture the test process output and assert that the C-level writes don't appear:
def test_suppress_output_c_level():
    """Test that suppress_output() suppresses C-level output (file descriptor level)."""
    import os
    import subprocess
    import sys

    # Run a subprocess that writes to FDs during suppression
    result = subprocess.run(
        [sys.executable, '-c', '''
import os
from flixopt.io import suppress_output

with suppress_output():
    os.write(1, b"C-level stdout\\n")
    os.write(2, b"C-level stderr\\n")

print("After suppression")
'''],
        capture_output=True,
        text=True
    )

    assert "C-level stdout" not in result.stdout
    assert "C-level stderr" not in result.stderr
    assert "After suppression" in result.stdout

However, the current implementation is sufficient for most use cases.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- examples/05_Two-stage-optimization/two_stage_optimization.py (1 hunks)
- flixopt/calculation.py (6 hunks)
- tests/test_io.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
tests/test_io.py (1)
- flixopt/io.py (1): suppress_output (940-992)

examples/05_Two-stage-optimization/two_stage_optimization.py (1)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)

flixopt/calculation.py (3)
- flixopt/solvers.py (1): _Solver (17-42)
- flixopt/config.py (2): CONFIG (78-492), Solving (210-232)
- flixopt/io.py (1): suppress_output (940-992)
🔇 Additional comments (9)
examples/05_Two-stage-optimization/two_stage_optimization.py (1)
22-23: LGTM! Exploration mode is appropriately configured for this example.
The addition of fx.CONFIG.exploring() aligns with the PR's goal of providing configuration-driven defaults. This enables console logging, solver output, and browser plotting, which are appropriate for an interactive example script.

tests/test_io.py (3)
83-117: LGTM! Test correctly validates file descriptor redirection.
The test properly verifies that suppress_output() redirects file descriptor 1 (stdout) to /dev/null during the context and restores it afterward. The test logic correctly captures output before, during, and after suppression.

119-143: LGTM! Test validates Python-level stream functionality after suppression.
The test correctly verifies that Python-level stdout remains functional after exiting the suppress_output() context, ensuring proper cleanup and restoration.

145-168: LGTM! Test ensures exception safety of stream restoration.
The test properly validates that suppress_output() restores file descriptors correctly even when an exception is raised within the context, confirming the cleanup logic in the finally block works as intended.

flixopt/calculation.py (5)
231-231: LGTM! Parameter change enables configuration-driven defaults.
Changing log_main_results from bool to bool | None allows callers to defer to CONFIG.Solving.log_main_results when None is passed, aligning with the PR's goal of centralized configuration. The change is backward compatible.

241-242: LGTM! Improved logging for solver completion.
The added logging statements provide useful feedback about solving duration and final model status, enhancing observability without changing behavior.

256-256: LGTM! Correctly implements default value fallback.
The logic properly uses the provided log_main_results parameter when not None, otherwise falls back to CONFIG.Solving.log_main_results. This implements the intended configuration hierarchy.

579-579: LGTM! Default behavior changed to align with silent framework.
The default value for log_main_results changed from True to False, implementing the PR's objective of reducing output by default. Users can explicitly pass True if they want detailed logging.

633-633: LGTM! Completion logging provides useful high-level summary.
The logging statement provides a concise summary of the overall solving process, complementing the per-segment progress tracking.
Actionable comments posted: 0
🧹 Nitpick comments (3)
tests/test_io.py (2)
83-117: Consider adding inline comments to clarify the multi-level FD redirection.
The test logic is correct and the resource cleanup is properly handled. However, the multi-level file descriptor redirection (original stdout → test file → /dev/null → test file → original stdout) is complex and could benefit from inline comments explaining each redirection step for future maintainers.
Example clarification:
 with open(test_file, 'w') as f:
     original_stdout_fd = os.dup(1)  # Save original stdout FD
     try:
-        # Redirect FD 1 to our test file
+        # Step 1: Redirect FD 1 (stdout) from terminal to our test file
         os.dup2(f.fileno(), 1)
         os.write(1, b'before suppression\n')

         with suppress_output():
-            # Inside suppress_output, writes should go to /dev/null, not our file
+            # Step 2: suppress_output saves current FD 1 (test file) and redirects to /dev/null
+            # This write goes to /dev/null, not our test file
             os.write(1, b'during suppression\n')

-        # After suppress_output, writes should go to our file again
+        # Step 3: suppress_output restores FD 1 back to test file
         os.write(1, b'after suppression\n')
     finally:
-        # Restore original stdout
+        # Step 4: Restore FD 1 back to original terminal
         os.dup2(original_stdout_fd, 1)
         os.close(original_stdout_fd)
170-188: Consider adding assertions to verify suppression actually occurs.
The test is a valid smoke test that ensures C-level writes don't crash, but it lacks assertions to verify that output is actually suppressed. Consider capturing output or using a temporary file to verify the C-level writes don't produce visible output.
Example improvement:
def test_suppress_output_c_level(tmp_path):
    """Test that suppress_output() suppresses C-level output (file descriptor level)."""
    import os
    import sys

    from flixopt.io import suppress_output

    # Redirect stdout to a temporary file to capture output
    test_file = tmp_path / 'c_level_output.txt'
    with open(test_file, 'w') as f:
        original_stdout_fd = os.dup(1)
        try:
            os.dup2(f.fileno(), 1)

            # Write before suppression
            os.write(1, b'before\n')

            with suppress_output():
                # These writes should be suppressed
                os.write(1, b'C-level stdout write\n')
                os.write(2, b'C-level stderr write\n')

            # Write after suppression
            os.write(1, b'after\n')
        finally:
            os.dup2(original_stdout_fd, 1)
            os.close(original_stdout_fd)

    # Verify C-level writes were suppressed
    content = test_file.read_text()
    assert 'before' in content
    assert 'C-level stdout write' not in content
    assert 'after' in content

flixopt/calculation.py (1)
255-255: Consider extracting the conditional for readability.
The inline ternary works but is a bit dense. Extracting to a variable would improve clarity:
+ should_log_results = log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results
- if log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results:
+ if should_log_results:
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- flixopt/calculation.py (5 hunks)
- tests/test_io.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tests/test_io.py (1)
- flixopt/io.py (1): suppress_output (940-992)

flixopt/calculation.py (3)
- flixopt/solvers.py (1): _Solver (17-42)
- flixopt/config.py (2): CONFIG (78-492), Solving (210-232)
- flixopt/io.py (1): suppress_output (940-992)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.13)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.10)
🔇 Additional comments (9)
tests/test_io.py (3)
119-143: LGTM!
The test correctly verifies that Python-level streams remain functional after exiting the suppress_output context. Resource management is proper with the try/finally block.

145-168: LGTM!
The test effectively verifies that suppress_output restores file descriptors correctly even when an exception occurs within the context. The combination of FD number assertions and functional verification with stdout.write() provides good coverage.

190-224: LGTM!
The test thoroughly validates the tqdm cleanup pattern with proper exception handling. The nested try blocks correctly track both exception occurrence and cleanup execution, and testing close() idempotency is a good practice. Using disable=True appropriately prevents unwanted output during test execution.

flixopt/calculation.py (6)
16-16: LGTM! New imports support silent framework and progress tracking.
The sys and tqdm imports are correctly used for progress bar implementation (line 590) and silent mode support, aligning with the PR's silent framework objectives.
Also applies to: 24-24

230-230: Good design: Allow None to inherit from CONFIG.
The signature change to bool | None enables callers to defer to centralized configuration when not explicitly specified, supporting the silent framework feature while maintaining backward compatibility.

240-242: LGTM! Post-solve logging improves observability.
The immediate logging of solver name, duration, and status provides helpful feedback and aligns with the centralized configuration approach.

584-592: LGTM! Progress bar integration respects silent mode.
The tqdm setup correctly:
- Writes to stdout for consistency
- Respects CONFIG.Solving.log_to_console to disable in silent mode
- Provides useful feedback for long-running segmented calculations

633-633: LGTM! Summary logging provides useful end-to-end visibility.
The final duration log (outside the suppression context) gives users clear feedback on total solving time across all segments.

594-627: Verification complete: Exception handling and error visibility are properly implemented.
The suppress_output() context manager (flixopt/io.py:940-989) uses a generator-based approach with a finally block that restores file descriptors via dup2() before any exception propagates. This ensures exception messages from calculation.solve() remain fully visible despite output suppression.

Additionally, critical investment warnings are logged via logger.critical() at line 616, before the suppress_output() block begins at line 620, ensuring they are never suppressed.

The code correctly addresses both concerns raised in the review comment.
…r not show solver output
Actionable comments posted: 1
🧹 Nitpick comments (1)
flixopt/calculation.py (1)
255-255: Consider extracting the nested ternary for readability.
The nested ternary operator if log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results: is correct but difficult to parse at a glance.

Apply this diff to improve clarity:
- if log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results:
+ should_log_results = log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results
+ if should_log_results:
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- CHANGELOG.md (1 hunks)
- flixopt/calculation.py (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
flixopt/calculation.py (4)
- flixopt/solvers.py (1): _Solver (17-42)
- flixopt/config.py (2): CONFIG (78-492), Solving (210-232)
- flixopt/io.py (1): suppress_output (940-992)
- flixopt/features.py (1): InvestmentModel (24-144)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
🔇 Additional comments (6)
CHANGELOG.md (1)
54-68: LGTM! Clear documentation of new CONFIG.Solving feature.
The documentation clearly describes the new centralized solver parameter management, including all four parameters and their defaults, as well as the flexibility for per-instance overrides.
flixopt/calculation.py (5)
16-16: LGTM! Imports are properly used.
The sys import is used for the progress bar's file parameter (line 660), and tqdm is used for the progress bar implementation (line 655).
Also applies to: 24-24

230-230: LGTM! Parameter signature change enables CONFIG fallback.
The change from bool = True to bool | None = None allows the method to respect the centralized CONFIG.Solving.log_main_results default while still supporting explicit overrides.

240-241: LGTM! Improved observability with solve logging.
The added logging provides immediate feedback about solve duration and model status, which is helpful for debugging and monitoring.

574-610: LGTM! Well-structured helper method for segment solving.
The _solve_single_segment helper cleanly encapsulates the per-segment solving logic with proper:
- Start value transfer between segments (lines 584-585)
- Investment warning on first segment only (lines 590-601)
- Conditional output suppression (lines 605-609)

611-681: LGTM! Excellent implementation of dual output modes.
The method properly implements two execution paths (sketched after this list):
- Path 1 (lines 645-652): Detailed output when show_individual_solves=True
- Path 2 (lines 654-671): Compact progress bar with suppressed output (default)

Key strengths:
- Progress bar properly disabled when CONFIG.Solving.log_to_console=False (line 661)
- Robust cleanup with try/finally block (lines 664-671)
- Comprehensive docstring explaining both modes (lines 618-640)
- Final summary logging of total duration (line 677)
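The two paths described above could be pictured like this. It is an illustrative sketch only: solve_segment stands in for the private per-segment helper, and the function signature is hypothetical, not the actual flixopt API.

```python
import sys
from tqdm import tqdm

def do_modeling_and_solve_sketch(segments, solve_segment,
                                 show_individual_solves=False, log_to_console=True):
    """Hypothetical sketch of the two output modes reviewed above."""
    if show_individual_solves:
        # Path 1: every segment prints its full solver log.
        for segment in segments:
            solve_segment(segment, suppress=False)
    else:
        # Path 2: one compact progress bar, per-segment solver output suppressed.
        progress_bar = tqdm(segments, desc='Solving segments', unit='segment',
                            file=sys.stdout, disable=not log_to_console)
        try:
            for segment in progress_bar:
                solve_segment(segment, suppress=True)
        finally:
            progress_bar.close()
```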
Actionable comments posted: 1
🧹 Nitpick comments (1)
flixopt/calculation.py (1)
255-255: Consider extracting the nested ternary for readability.
The nested ternary expression is correct but challenging to parse at a glance. Consider refactoring for clarity:
+ should_log = log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results
- if log_main_results if log_main_results is not None else CONFIG.Solving.log_main_results:
+ if should_log:
      logger.info(
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- examples/02_Complex/complex_example_results.py (2 hunks)
- examples/03_Calculation_types/example_calculation_types.py (2 hunks)
- flixopt/calculation.py (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
examples/03_Calculation_types/example_calculation_types.py (4)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)
- flixopt/results.py (2): flow_system (339-355), plot_network (1023-1043)
- flixopt/plotting.py (1): plot_network (827-908)
- flixopt/flow_system.py (1): plot_network (549-588)

examples/02_Complex/complex_example_results.py (4)
- flixopt/config.py (2): CONFIG (78-492), exploring (464-476)
- flixopt/plotting.py (1): plot_network (827-908)
- flixopt/flow_system.py (1): plot_network (549-588)
- flixopt/results.py (1): plot_network (1023-1043)

flixopt/calculation.py (4)
- flixopt/solvers.py (1): _Solver (17-42)
- flixopt/config.py (3): CONFIG (78-492), Solving (210-232), Plotting (234-268)
- flixopt/io.py (1): suppress_output (940-992)
- flixopt/features.py (1): InvestmentModel (24-144)
🔇 Additional comments (11)
examples/03_Calculation_types/example_calculation_types.py (2)
14-14: LGTM! Clean adoption of the new CONFIG API.
The fx.CONFIG.exploring() call properly configures the framework for interactive exploration, enabling console logging, solver output, and browser plotting. This is the recommended approach for example scripts.
167-167: Verify the intentional change to enable controls.
The simplified call now uses default arguments, but this changes behavior: controls went from False to True (default). With controls=True, the PyVis network visualization will display interactive UI buttons for adjusting physics, layout, etc.
examples/02_Complex/complex_example_results.py (1)
8-8: LGTM! Proper migration to the new CONFIG API.The
fx.CONFIG.exploring()call correctly replaces the previous manual logging configuration, enabling the appropriate settings for interactive result exploration.flixopt/calculation.py (8)
16-16: New imports used correctly.The
sysandtqdmimports support the new progress bar functionality for segmented solving.Also applies to: 24-24
230-230: Good design: optional parameter with CONFIG fallback.Changing
log_main_resultstobool | Noneallows callers to explicitly override or fall back to the global configuration, aligning well with the new centralized config system.
240-241: Useful logging additions.Logging the solve duration and model status immediately after solving provides helpful feedback for monitoring and debugging.
373-373: Plotting now respects centralized configuration.Using
CONFIG.Plotting.default_showaligns with the new configuration system for controlling plot display behavior.
574-610: Well-structured helper method.The
_solve_single_segmentmethod cleanly encapsulates the per-segment solving workflow with proper handling of start value transfer, investment warnings (first segment only), and conditional output suppression.
611-672: Clear dual-path design with proper cleanup.The updated method provides two well-separated execution paths: detailed individual solve output and a compact progress bar mode. The
finallyblock ensures proper progress bar cleanup.
661-661: Verify progress bar disable logic for user experience.When
CONFIG.Solving.log_to_consoleisFalseandshow_individual_solvesisFalse, the progress bar is disabled and solver output is suppressed, leaving no visual feedback during potentially long-running segmented calculations. Is this the intended behavior for silent mode, or should the progress bar remain visible as the only progress indicator?Additionally, Line 677 logs the solve summary unconditionally, which seems inconsistent with disabling the progress bar based on
log_to_console. Consider whether the final summary should also respect this configuration.
677-678: Good summary logging after segmented solve.
Logging the total solving duration provides a clear summary of the end-to-end segmented calculation performance.
Description
Brief description of the changes in this PR.
Type of Change
Related Issues
Closes #402
Testing
Checklist
Summary by CodeRabbit
New Features
Improvements
Tests / Chores