
Comments

Fix logic issues in predator prey model and tune parameters#27

Open
tkoskela wants to merge 8 commits into carpentries-incubator:main from tkoskela:tk/predprey-params

Conversation


@tkoskela commented Jan 28, 2026

I sat down with @JostMigenda in a workshop and had a go at tuning the parameters of the predator prey model to get it to oscillate between predator and prey populations. It's not quite there yet (the prey still all die out), but it works a bit better than before.

[image: plot of predator and prey population counts from the tuned model]
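For context, the behaviour being tuned toward is the classic predator-prey oscillation. The lesson's model is agent-based, but the Lotka-Volterra equations give a minimal sketch of the target dynamics; all coefficients below are invented for illustration and are not the values in predprey.py.

```python
# Illustrative only: the PR tunes an agent-based model, but the classic
# Lotka-Volterra ODEs show the behaviour being aimed for. Every coefficient
# here is made up for the demo, not taken from predprey.py.
def lotka_volterra(prey, pred, steps=10_000, dt=0.001,
                   alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    history = []
    for _ in range(steps):
        d_prey = (alpha * prey - beta * prey * pred) * dt   # growth - predation
        d_pred = (delta * prey * pred - gamma * pred) * dt  # feeding - starvation
        prey, pred = prey + d_prey, pred + d_pred
        history.append((prey, pred))
    return history

history = lotka_volterra(prey=10.0, pred=5.0)
# With balanced parameters both populations cycle rather than hitting zero:
print(min(p for p, _ in history), max(p for p, _ in history))
```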


github-actions bot commented Jan 28, 2026

Thank you!

Thank you for your pull request 😃

🤖 This automated message can help you check the rendered files in your submission for clarity. If you have any questions, please feel free to open an issue in {sandpaper}.

If you have files that automatically render output (e.g. R Markdown), then you should check for the following:

  • 🎯 correct output
  • 🖼️ correct figures
  • ❓ new warnings
  • ‼️ new errors

Rendered Changes

🔍 Inspect the changes: https://github.com/carpentries-incubator/pando-python/compare/md-outputs..md-outputs-PR-27

The following changes were observed in the rendered markdown documents:

```
 fig/predprey_out.png              | Bin 69427 -> 91134 bytes
 files/pred-prey/predprey.py       | 101 ++++++++++++++++++--------------------
 files/pred-prey/predprey.py.lprof | Bin 345 -> 369 bytes
 md5sum.txt                        |   4 +-
 profiling-functions.md            |   8 +--
 profiling-lines.md                |  44 ++++++++---------
 6 files changed, 77 insertions(+), 80 deletions(-)
```
What does this mean?

If you have source files that require output and figures to be generated (e.g. R Markdown), then it is important to make sure the generated figures and output are reproducible.

This output provides a way for you to inspect the output in a diff-friendly manner so that it's easy to see the changes that occur due to new software versions or randomisation.

⏱️ Updated at 2026-02-21 08:18:31 +0000

@JostMigenda (Collaborator)

Thanks @tkoskela for looking into this. Yep—it’s very tricky to tune the parameters to give nice results, but this is certainly a big improvement over the previous version.

A couple of notes:

  • This includes the bugfix commit from "fix logic issues in predprey example" (#17). It makes sense to keep bugfixes and tuning in one PR; we just shouldn't get confused by this later.
  • Before we merge this, I should check the episode text & exercises for consistency; some of the profiling outcomes we mention there will have changed as a result of these changes.
  • Finally, very minor notes on the code itself:
    • The comments look like LLM slop: vaguely plausible, but sometimes contradicting the actual changes in the PR; and I don’t think they add much over the parameter names themselves
    • Adding extra spaces around = for alignment: I know there are different opinions around this, but the Python convention is not to add them (see the sketch below)
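For reference, PEP 8 recommends exactly one space on either side of `=` in assignments, with no extra padding for vertical alignment. A quick sketch (the parameter names are illustrative, not copied from predprey.py):

```python
# Aligned assignments (discouraged by PEP 8):
PREY_COUNT         = 200
PREDATOR_COUNT     = 20
PREY_HUNGER_THRESH = 5

# PEP 8 style: a single space on each side of `=`:
PREY_COUNT = 200
PREDATOR_COUNT = 20
PREY_HUNGER_THRESH = 5
```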

@Robadob (Collaborator) commented Jan 28, 2026

More than happy to defer to @JostMigenda's judgement on approving this; it looks reasonable at a glance. Please remember to briefly review the instructor notes/solution in case anything subtle has changed there.

I think the original code was heavily inspired by our FLAMEGPU Python tutorial, which is itself probably inspired by a common simple agent-based model. That subtle bug looks like something FLAMEGPU would handle natively, though, so it's almost certainly my mistake.

https://github.com/FLAMEGPU/FLAMEGPU2-tutorial-python

Thanks for supporting this project!

@Robadob

This comment was marked as outdated.

@tkoskela (Author)

> Finally, very minor notes on the code itself:
>
>   • The comments look like LLM slop: vaguely plausible, but sometimes contradicting the actual changes in the PR; and I don’t think they add much over the parameter names themselves
>   • Adding extra spaces around = for alignment: I know there are different opinions around this, but the Python convention is not to add them

Yeah, I was using Copilot for tuning the parameters. I take your point about the comments; they weren't very useful, so I've removed them and fixed the formatting.

github-actions bot pushed a commit that referenced this pull request Feb 21, 2026
@JostMigenda JostMigenda requested a review from Robadob February 21, 2026 03:18
@JostMigenda JostMigenda changed the title Tune predator prey model parameters Fix logic issues in predator prey model and tune parameters Feb 21, 2026
@JostMigenda (Collaborator)

The function-level profiling results remained qualitatively similar, so I made just a few small tweaks to the text and kept the screenshots of profiling results unchanged. The line-level profiling results changed more substantially, so I had to update the outputs as well as the exercise text.

These commits should now cover all follow-on changes resulting from the logic fix, and the PR is ready for review.

Collaborator review comment:

Yeah this looks so much better.

```diff
 if c:
     children.append(c)
-self.predators.extend(children)
+self.prey.extend(children)
```
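To make the fix concrete, here is a hedged, self-contained sketch of the breeding step; the class and method names are assumptions for illustration, not the repository's exact code:

```python
import random

class Prey:
    def __init__(self, life=10):
        self.life = life

    def breed(self):
        # Hypothetical rule: produce a child with some probability, else None.
        return Prey() if random.random() < 0.1 else None

class Model:
    def __init__(self, n_prey=100):
        self.prey = [Prey() for _ in range(n_prey)]
        self.predators = []

    def breed_prey(self):
        children = []
        for prey in self.prey:
            c = prey.breed()
            if c:
                children.append(c)
        # The fix, per the diff above: prey offspring join self.prey;
        # the buggy version extended self.predators here instead.
        self.prey.extend(children)

model = Model()
model.breed_prey()
print(len(model.prey))
```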
Collaborator review comment:

Still such a dumb mistake.


```diff
-Given that the following line 289 only has a relative 0.6% time, it can be understood that the vast majority of times the condition `prey.life < PREY_HUNGER_THRESH` is evaluated it does not proceed.
+Given that the following line 289 only has a relative 4.8% time, it can be understood that the vast majority of times the condition `prey.life < PREY_HUNGER_THRESH` is evaluated it does not proceed.
```
Collaborator review comment:

Re-reading this, perhaps it would be more correct to refer to hits rather than % time: 271 million vs 61 million hits, roughly a 20% pass-through rate, so less significant than the time difference. 🤷‍♂️
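For anyone reproducing this comparison: the Hits and % Time figures come from line_profiler, which the lesson already uses (note the predprey.py.lprof file in the diff stat above). Below is a self-contained sketch with invented names and values, tuned so that roughly 20% of condition hits pass through, mirroring the ratio discussed here:

```python
from dataclasses import dataclass
from line_profiler import LineProfiler

@dataclass
class Prey:
    life: int

PREY_HUNGER_THRESH = 2  # illustrative value, not the lesson's

def count_hungry(prey_list, thresh):
    hungry = 0
    for prey in prey_list:
        if prey.life < thresh:  # the condition is hit on every iteration...
            hungry += 1         # ...but the body runs on only ~20% of hits
    return hungry

lp = LineProfiler()
wrapped = lp(count_hungry)  # LineProfiler instances wrap functions when called
wrapped([Prey(life=i % 10) for i in range(100_000)], PREY_HUNGER_THRESH)
lp.print_stats()  # the report lists Hits, Time, Per Hit and % Time per line
```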

@Robadob (Collaborator) left a review:

Changes look thorough. I added one suggested tweak (more so to the original text). However, after writing the comment (profiling-lines.md:502) I'm in two minds about it. See what you think.

github-actions bot pushed a commit that referenced this pull request Feb 21, 2026
@Robadob (Collaborator) commented Feb 21, 2026

I've rebased to resolve the two conflicts (junk conflicts around timing.monotonic at the end of the code 🤷‍♂️).

Please squash, or at least clean up the history before merging.

Feel free to delete my contribution tags from any commit messages; all I've done is housekeeping.

github-actions bot pushed a commit that referenced this pull request Feb 21, 2026