Fix benchmarking CI job #154
Conversation
Force-pushed from fe55f78 to 92fdcec
I accidentally added a commit to this PR that was intended for my other PR, so I removed it with a force push.
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##           master     #154      +/-  ##
=========================================
- Coverage   94.91%   90.86%   -4.05%
=========================================
  Files          78       77       -1
  Lines        2477     2376     -101
=========================================
- Hits         2351     2159     -192
- Misses        126      217      +91
I reviewed the logs of the 3 failed actions, and here are my findings:
I also opened this PR on my fork, and all actions completed successfully. A sample of AirspeedVelocity.jl's output can be seen in this comment on the PR on my fork: hakkelt#3 (comment). Anyway, @lostella, what do you think about this package/solution? I could not find out what the output of the previous solution looked like, because the very old GitHub Actions runs were deleted and all the more recent runs failed.
@hakkelt After a quick look, AirspeedVelocity is interesting. I will check out this branch locally and try it to get a feel for how it works. Assuming it's fine, I would be OK with the change. What's important is that benchmarks run correctly in GH workflows in some way, and that regressions are spotted. I restarted the 1.9 macOS tests, which appear to have succeeded now. One thing we should consider (maybe in a separate PR) is bumping the Julia versions in the test workflow to 1 + LTS. I believe the codecov stuff we could remove. (Also this, separately)
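For the "1 + LTS" idea, a minimal sketch of what the test matrix could look like, assuming the workflow uses julia-actions/setup-julia (the `lts` version alias and the exact step layout below are assumptions, not taken from this repository's workflow files):

```yaml
# Sketch only: test on the latest 1.x release and the current LTS.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        julia-version: ['1', 'lts']  # 'lts' alias assumed supported by setup-julia
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
        with:
          version: ${{ matrix.julia-version }}
      - uses: julia-actions/julia-runtest@v1
```

This would replace any hard-coded version list (e.g. a pinned 1.9) so the LTS entry tracks whatever Julia currently designates as LTS.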
The benchmark report comment looks great, btw.
When trying to fix the failing benchmarking CI job, I found AirspeedVelocity.jl. I tried running the benchmarks with it and found it more intuitive, both when running benchmarks locally from the CLI and when writing the configuration for the GitHub action. It also posts the benchmarking results as a comment on the PR. Finally, this package removes the need for the custom benchmark-running script runbenchmarks.jl.
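As a rough illustration of the kind of workflow this enables, here is a sketch of a benchmarking job built around AirspeedVelocity.jl's `benchpkg` CLI. The step names, package name placeholder, and flags are illustrative assumptions; the actual configuration lives in this PR's workflow file:

```yaml
# Hypothetical benchmarking workflow; names, paths, and flags are illustrative.
name: Benchmarks
on:
  pull_request:

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
        with:
          version: '1'
      # Install AirspeedVelocity.jl, which ships the benchpkg CLI
      # (assumed to land in ~/.julia/bin).
      - run: |
          julia -e 'using Pkg; Pkg.add("AirspeedVelocity")'
          echo "$HOME/.julia/bin" >> "$GITHUB_PATH"
      # Compare the PR branch against master; MyPackage and the --rev
      # syntax are placeholders for the real invocation.
      - run: benchpkg MyPackage --rev=master,${{ github.head_ref }}
```

The appeal over a hand-rolled runbenchmarks.jl script is that checking out both revisions, running the suite on each, and formatting the comparison table are all handled by the tool.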