
Conversation

@mariusvniekerk
Collaborator

Use pytest-benchmark so that regressions can be tracked in future.

@mariusvniekerk
Collaborator Author

cc @llllllllll



# Hacky method to get benchmarks namespaced
class TestBenchMark(object):

Collaborator

What is this for?

Collaborator Author

Basically a holder to stick generated tests into, with decent names so that pytest can discover them appropriately.

Sadly, pytest-benchmark doesn't just provide a decorator.
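
As a minimal sketch of the pattern described above (the _make_benchmark helper and the specific cases are illustrative, not the PR's actual code), generated tests can be attached to the holder class under readable names, with each one requesting the benchmark fixture itself:

from multipledispatch import dispatch


@dispatch(int)
def isint(x):
    return True


@dispatch(object)
def isint(x):
    return False


def _make_benchmark(func, arg):
    # pytest-benchmark exposes a `benchmark` fixture rather than a plain
    # decorator, so every generated test has to accept the fixture itself.
    def test(self, benchmark):
        benchmark(func, arg)
    return test


class TestBenchMark(object):
    # Attach the generated tests under descriptive names so that pytest
    # discovery (and the benchmark report) stays readable.
    test_isint_int = _make_benchmark(isint, 1)
    test_isint_str = _make_benchmark(isint, 'a')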

@mrocklin
Owner

It might be nice to separate benchmarks from tests.

@mariusvniekerk
Collaborator Author

We can also have a proper benchmark; this one probably doesn't benchmark all that well, since a reasonable amount of the code being called is pytest's assert-rewrite machinery.

For the initial version I can just make the benchmark we have right now be the benchmark.

@mrocklin
Owner

I agree that we shouldn't feel the need to make a benchmark suite in order to make progress here.

Generally some care was taken to keep multipledispatch calls cheap. It would be good to understand where this change might break that previous work.
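
A minimal sketch of how that could be checked (not part of the PR; double and double_plain are made-up names): benchmark the same body with and without the dispatch machinery in front of it, so any regression in call overhead shows up as a widening gap between the two results:

from multipledispatch import dispatch


@dispatch(int)
def double(x):
    return x * 2


def double_plain(x):
    return x * 2


def test_double_dispatched(benchmark):
    # Full cost: the Dispatcher's cache lookup plus the function body.
    benchmark(double, 3)


def test_double_plain(benchmark):
    # Baseline: the same body with no dispatch layer in front of it.
    benchmark(double_plain, 3)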

@mariusvniekerk
Collaborator Author

So this works now, but as part of doing this I had to drop py26 and py32. (Added py37 and pypy3 to the test suite)

@mariusvniekerk
Collaborator Author

ping @mrocklin @llllllllll

@mrocklin
Owner

mrocklin commented Jan 10, 2018 via email

benchmark(isint, val)


@pytest.mark.parametrize("val", [(1, 'a')])

Collaborator

Why is this parametrized over a single case?

Collaborator Author

There are actually two arguments here: 1 and 'a'.

Collaborator Author

Mostly so that if we add more complex examples we can see which kinds of types are slower in the benchmarks. You can only use the benchmark fixture once per test, for some reason.

Collaborator

Okay, so it is because you plan on adding more cases?

Collaborator Author

Yeah, a large part of the reason I want to do this is for the fancier pytypes-based checks in #69, where we can dispatch on typed containers. That will potentially be slow.
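
A sketch of where that parametrization could go (the extra cases are hypothetical, not from the PR): each argument type becomes its own parametrized case, since the benchmark fixture can only be invoked once per test, and the report then shows which kinds of types dispatch more slowly:

import pytest
from multipledispatch import dispatch


@dispatch(object)
def identity(x):
    return x


@pytest.mark.parametrize("val", [1, 'a', 1.5, (1, 2), [1, 2, 3]])
def test_identity_benchmark(benchmark, val):
    # One benchmark() call per parametrized case; the report then shows
    # which kinds of argument types dispatch more slowly.
    benchmark(identity, val)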

@llllllllll
Collaborator

This looks good to me. I had a question about the parametrization, but otherwise this looks like a reasonable start for a benchmark suite.

@llllllllll
Collaborator

+1 from me

@mariusvniekerk
Collaborator Author

@mrocklin nudge

@mrocklin merged commit 5a77a73 into mrocklin:master on Jan 26, 2018
@mrocklin
Owner

Thanks @mariusvniekerk
