Switch testing to pytest, add benchmark test #70
Conversation
Use pytest-benchmark so that regressions can be tracked in future.
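For context, a minimal pytest-benchmark test looks roughly like the sketch below. The `isint` function and values are illustrative stand-ins, not the exact code in this PR.

```python
from multipledispatch import dispatch


@dispatch(int)
def isint(x):
    return True


@dispatch(object)
def isint(x):
    return False


def test_isint_benchmark(benchmark):
    # pytest-benchmark injects the `benchmark` fixture; calling it
    # runs the target repeatedly and records timing statistics.
    benchmark(isint, 1)
```

pytest-benchmark then reports per-test timing tables on each run, which is what makes regression tracking possible.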
cc @llllllllll
multipledispatch/tests/test_core.py (Outdated)

```python
# Hacky method to get benchmarks namespaced
class TestBenchMark(object):
```
What is this for?
Basically a holder to stick generated tests into, with decent names so that pytest can discover them appropriately. Sadly, pytest-benchmark doesn't just provide a decorator.
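A sketch of that pattern (illustrative names, not the exact diff): generated test functions are attached to a `Test*` class so pytest's normal collection discovers them under a readable namespace.

```python
def isint(x):
    # Stand-in for a dispatched function; illustrative only.
    return isinstance(x, int)


def _make_benchmark(name, func, arg):
    # Build a test method that exercises the pytest-benchmark fixture.
    def test(self, benchmark):
        benchmark(func, arg)
    test.__name__ = "test_%s" % name
    return test


class TestBenchMark(object):
    # Holder class: pytest collects `test_*` attributes on `Test*`
    # classes, which namespaces the generated benchmarks.
    test_isint = _make_benchmark("isint", isint, 1)
```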
It might be nice to separate benchmarks from tests.
We can also add a proper benchmark; this one probably doesn't measure all that well, since a reasonable amount of the code being called is pytest's assert-rewrite machinery. Initially I can just make the benchmark we have right now be the benchmark.
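A sketch of a more isolated benchmark, assuming an `isint` stand-in as above: keep assertions out of the timed callable so pytest's assertion rewriting isn't part of the measurement.

```python
def isint(x):
    # Illustrative stand-in for the dispatched function.
    return isinstance(x, int)


def test_isint_overhead(benchmark):
    # Only the bare call is timed; the result is checked outside
    # the measured loop, so assert-rewrite code isn't benchmarked.
    result = benchmark(isint, 1)
    assert result is True
```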
I agree that we shouldn't feel the need to make a benchmark suite in order to make progress here. Generally some care was taken to keep multipledispatch calls cheap. It would be good to understand where this change might break that previous work.
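One way to watch for that kind of regression would be to benchmark a dispatched call against a plain function call, something like this sketch (names are illustrative):

```python
from multipledispatch import dispatch


@dispatch(int)
def dispatched(x):
    return x


def plain(x):
    return x


def test_dispatched_call(benchmark):
    # Measures dispatch lookup plus the call itself.
    benchmark(dispatched, 1)


def test_plain_call(benchmark):
    # Baseline: a direct Python call with no dispatch machinery.
    benchmark(plain, 1)
```

Comparing the two rows in pytest-benchmark's output gives a rough per-call dispatch overhead.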
So this works now, but as part of doing this I had to drop py26 and py32 (and added py37 and pypy3 to the test suite).
ping @mrocklin @llllllllll
It is unlikely that I will look at this over the next few days (conferencing). I'm marking it as unread but it's possible that I will forget about it. I recommend re-pinging on Monday if I have not responded. @llllllllll if you have time to look at this, that would be welcome.
```python
@pytest.mark.parametrize("val", [(1, 4)])
...
benchmark(isint, val)
```
Why is this parametrized over a single case?
There are actually two arguments here: 1 and 'a'.
Mostly so that if we add more complex examples we can see which kinds of types are slower. For some reason you can only use the benchmark fixture once per test.
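A sketch of how that could extend (values and the `isint` stand-in are illustrative): one parametrized case per argument type, each getting its own benchmark row since the fixture can only be invoked once per test.

```python
import pytest


def isint(x):
    # Illustrative stand-in for the dispatched function under test.
    return isinstance(x, int)


@pytest.mark.parametrize("val", [1, "a", [1, 2, 3], {"a": 1}])
def test_isint_types(benchmark, val):
    # Each parametrized case becomes a separate benchmark, making
    # it easy to see which argument types dispatch more slowly.
    benchmark(isint, val)
```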
Okay, so it is because you plan on adding more cases?
Yeah, a large part of the reason I want to do this is for the fancier pytypes-based checks in #69, where we can dispatch on typed containers. That will potentially be slow.
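For a sense of why that could be slow (a sketch; the typed-container notation in the comment below is hypothetical, not current multipledispatch API): today dispatch only inspects the outermost type.

```python
from multipledispatch import dispatch


@dispatch(list)
def describe(x):
    # Current behavior: one isinstance check on the outer type,
    # regardless of how large the list is.
    return "a list"


# A pytypes-style check such as List[int] vs List[str] (as discussed
# in #69) would also need to inspect element types, so its cost could
# grow with container size -- hence the interest in benchmarking.
```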
This looks good to me. I had a question about the parametrization, but otherwise this looks like a reasonable start for a benchmark suite.
+1 from me
@mrocklin nudge
Thanks @mariusvniekerk |