
WIP: Linear multistep Patankar schemes #182

Open

SKopecz wants to merge 25 commits into main from sk/patankar_multistep

Conversation

@SKopecz (Collaborator) commented Dec 23, 2025

This implements #107.

  • MPLM22 oop
  • MPLM33 oop
  • MPLM43 oop
  • MPLM54 oop
  • MPLM75 oop
  • MPLM106 oop
  • In-place implementations
  • Tests
  • Docs
  • Additional test problems
  • Benchmark diffusion test problem

@SKopecz (Collaborator, Author) commented Dec 23, 2025

All the one-step MPRK/SSPMPRK schemes as well as the multistep Patankar schemes can be interpreted as sequences of MPE steps with slightly modified input data. Before continuing this PR we should implement a function like _perform_step_MPE! and rewrite the implementations of the MPRK and SSPMPRK schemes. This will make the implementation of the multistep Patankar schemes easier and cleaner. The corresponding PR is #183.
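For reference, here is a minimal out-of-place sketch of such an MPE solve. It is only an illustration under assumed names (`mpe_step`, `prod_matrix`), not the in-place `_perform_step_MPE!` proposed above and not part of the package API: it assembles the Patankar-weighted linear system from a production matrix P and weight denominators σ and solves it.

```julia
using LinearAlgebra

# Hypothetical helper, not the package API: one modified Patankar-Euler (MPE)
# solve. P[i, j] >= 0 is the production of component i from component j (and
# thus the destruction of component j), rhs is the right-hand side vector,
# σ contains the Patankar weight denominators, and dt is the step size.
function mpe_step(P::AbstractMatrix, rhs::AbstractVector, σ::AbstractVector, dt)
    n = length(rhs)
    M = Matrix{eltype(rhs)}(I, n, n)
    for j in 1:n, i in 1:n
        i == j && continue
        M[i, i] += dt * P[j, i] / σ[i]  # destruction of component i, weighted by σ[i]
        M[i, j] -= dt * P[i, j] / σ[j]  # production of i from j, weighted by σ[j]
    end
    return M \ rhs
end

# Plain MPE step (σ = uⁿ, rhs = uⁿ) for the linear model u₁' = -u₁ + u₂, u₂' = u₁ - u₂.
prod_matrix(u) = [0.0 u[2]; u[1] 0.0]
u, dt = [0.9, 0.1], 0.1
u_new = mpe_step(prod_matrix(u), u, u, dt)
@assert sum(u_new) ≈ sum(u)  # conservation
@assert all(>(0), u_new)     # positivity
```

The one-step and multistep schemes would then differ only in how they choose P, rhs, and σ before calling such a helper.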

@codecov bot commented Dec 23, 2025

Codecov Report

❌ Patch coverage is 18.91117% with 283 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/mplm.jl | 0.00% | 267 Missing ⚠️ |
| src/mprk.jl | 27.27% | 16 Missing ⚠️ |


@coveralls commented Dec 23, 2025

Pull Request Test Coverage Report for Build 22068821122

Warning: This coverage report may be inaccurate.

This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

  • 66 of 349 (18.91%) changed or added relevant lines in 3 files are covered.
  • 14 unchanged lines in 3 files lost coverage.
  • Overall coverage decreased (-12.6%) to 84.915%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| src/mprk.jl | 6 | 22 | 27.27% |
| src/mplm.jl | 0 | 267 | 0.0% |

| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| src/sspmprk.jl | 1 | 99.37% |
| src/mprk.jl | 3 | 95.61% |
| src/mpdec.jl | 10 | 97.07% |

| Totals | Coverage Status |
| --- | --- |
| Change from base Build 21465719007 | -12.6% |
| Covered Lines | 1852 |
| Relevant Lines | 2181 |

💛 - Coveralls

export MPLM22

export prob_pds_linmod, prob_pds_linmod_inplace, prob_pds_nonlinmod,
prob_pds_robertson, prob_pds_brusselator, prob_pds_sir,
@SKopecz (Collaborator, Author) commented:

The new problems need to be added here so that they can be found by the tests.
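For illustration, a hypothetical extension of this export statement; `prob_pds_new` is only a placeholder for the actual names of the problems added in this PR.

```julia
# Hypothetical: extend the export list so the new problems are visible to the tests.
# prob_pds_new stands in for the actual names added in this PR.
export prob_pds_linmod, prob_pds_linmod_inplace, prob_pds_nonlinmod,
       prob_pds_robertson, prob_pds_brusselator, prob_pds_sir,
       prob_pds_new
```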

@SKopecz (Collaborator, Author) commented Feb 16, 2026

The authors of the original paper kindly shared their code; with its help, the current out-of-place implementations of MPLM22, MPLM33, and MPLM43 now yield the same results as those in the paper.

However, the implementation of the startup phase for these m-step methods is quite cumbersome and difficult to maintain. To achieve the required accuracy for the starting values, the authors utilize a sub-stepping strategy (dt/4) with nested lower-order MPLM schemes. This currently necessitates a redundant implementation: one for the standard perform_step! and another for a perform_substeps_MPLMXX function, which is used within the perform_step! of higher-order methods to compute initial values. This "double implementation" is error-prone and leads to significant code duplication.

Should we keep this sub-stepping strategy? If so, is there a way to eliminate the "double implementation"? Alternatively, we could use MPRK or MPDeC one-step schemes to compute the initial values. Since these are available for all orders, the reduced time step would not be necessary. However, this would also require implementations of these methods that can be integrated into the MPLM perform_step!.
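For illustration, a minimal sketch of such a startup phase under assumed names (`mplm_starting_values`, `onestep`), not the current implementation: it generates the m-1 starting values by repeatedly applying an arbitrary positivity-preserving one-step update, and with substeps = 4 it mimics the dt/4 sub-stepping described above.

```julia
# Hypothetical sketch, not the package implementation: compute the starting
# values u⁰, …, u^{m-1} of an m-step MPLM method by repeatedly applying a
# one-step update onestep(u, h), optionally with substeps sub-steps per step.
function mplm_starting_values(onestep, u0::AbstractVector, dt, m; substeps = 1)
    history = [copy(u0)]
    u = copy(u0)
    for _ in 1:(m - 1)
        for _ in 1:substeps
            u = onestep(u, dt / substeps)
        end
        push!(history, copy(u))
    end
    return history
end

# Example: starting values for a hypothetical 3-step method, using a
# first-order positivity-preserving update for the decay problem u' = -u,
# i.e. the Patankar-type step (1 + h) uⁿ⁺¹ = uⁿ.
onestep(v, h) = v ./ (1 + h)
starts = mplm_starting_values(onestep, [1.0, 2.0], 0.1, 3; substeps = 4)
```

Any MPRK or MPDeC step of matching order could be plugged in as `onestep`, which would make the reduced step size unnecessary.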

@ranocha @JoshuaLampert Do you have any suggestions?
