Account for changes in fixture dependencies properly #14104
@RonnyPfannschmidt please review 😃
RonnyPfannschmidt left a comment:
I'm currently not able to do a deep review. This has a few details that give me headaches:

- it sidesteps setup/teardown minimizing, and I'm not sure if it's feasible to integrate in a comprehensible manner
- it's unclear to me what happens if the involved fixtures are indirect to a test
- it's unclear to me what happens if someone creates a cycle via the new dependency tracking
Are you wondering whether fixing #14095 is an overall positive change? If so, I can remove that part from the PR for now, and it can be investigated separately later. There is also a minor issue with my implementation: when I do `request._execute_fixturedef(fixturedef, only_maintain_cache=True)` it may … I can change it so that the other fixtures are not …
What do you mean? Parametrized / non-parametrized case? In the parametrized case, we cannot get the parametrized fixture using
The initial cache handling step uses … There is …
Triggering a rebuild is more correct than not. With indirect I mean: what happens if a replaced fixture is not a dependency of the directly used fixtures, but rather down in the dependency tree? With loops I mean: what actually happens when one fixture goads another one's closure into calling `getfixturevalue` for itself?
I've found another issue in my implementation. Suppose we depend on some fixture dynamically, and after a recomputation we stop depending on it. Everything is fine, but at some point that fixture gets finalized, and because we did … One solution is to add an "evaluation epoch" to the finalizers, to avoid finalizing a more recent evaluation of the fixture than expected.
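The "evaluation epoch" idea can be sketched in isolation. Everything below (`FixtureSlot` and its methods) is a hypothetical toy, not pytest's internals: each recomputation bumps an epoch counter, and finalizers tagged with an older epoch are treated as stale.

```python
class FixtureSlot:
    """Toy holder for a fixture's cached value, tagged with an
    'evaluation epoch' that is bumped on every (re)computation."""

    def __init__(self):
        self.epoch = 0
        self.value = None
        self._finalizers = []  # list of (epoch, callback)

    def compute(self, factory):
        self.epoch += 1
        self.value = factory()
        return self.value

    def add_finalizer(self, callback):
        # Tag the finalizer with the evaluation it belongs to.
        self._finalizers.append((self.epoch, callback))

    def finalize(self):
        # Run finalizers LIFO, but skip stale ones: a finalizer registered
        # for an evaluation that has since been replaced must not tear
        # down the more recent value.
        pending, self._finalizers = self._finalizers, []
        for epoch, callback in reversed(pending):
            if epoch == self.epoch:
                callback()


log = []
slot = FixtureSlot()
slot.compute(lambda: "v1")
slot.add_finalizer(lambda: log.append("teardown v1"))
slot.compute(lambda: "v2")  # recomputation: the v1 finalizer is now stale
slot.add_finalizer(lambda: log.append("teardown v2"))
slot.finalize()
```

With this, only `"teardown v2"` is logged; the stale epoch-1 finalizer is dropped instead of tearing down the newer value.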
Teardown should ideally only happen in teardown phases; that's why the `teardown_towards` helpers got added. We should fail rather than tear down in setup/call phases.
I've moved parametrized fixture finalization to the teardown phase. We need to be cautious about what to tear down when. Consider these tests:

```python
@pytest.fixture(scope="session")
def foo(request):
    return request.param


@pytest.mark.parametrize("foo", [1], indirect=True)
def test_a(foo):
    pass


@pytest.mark.parametrize("foo", [1], indirect=True)
def test_b(foo):
    pass


def test_c():
    pass


@pytest.mark.parametrize("foo", [2], indirect=True)
def test_d(foo):
    pass
```

Finalizing … For each parametrized fixture in the current item, we should find the next item that uses the fixture with that parameter, and if the parameters differ, finalize the fixture in the current item's teardown. This requires some code restructuring, because we now require arbitrary lookahead instead of just `nextitem`. Overall, setup-plan did get prettier. A hefty chunk of … I need to write more tests, for the issues you pointed out and some more:
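The lookahead rule described above can be prototyped outside pytest. The item/param representation below is a made-up stand-in for pytest's collected items, not the PR's actual code:

```python
def teardown_plan(items):
    """For each item, list the parametrized fixtures to finalize in its
    teardown.  `items` is a list of (test_name, {fixture: param}) pairs.
    A fixture is torn down after item i when the next item that uses it
    requests a different param, or when no later item uses it at all."""
    plan = []
    for i, (_, params) in enumerate(items):
        finalize_here = []
        for fixture, param in params.items():
            next_param = next(
                (p[fixture] for _, p in items[i + 1:] if fixture in p),
                None,
            )
            if next_param != param:
                finalize_here.append(fixture)
        plan.append(finalize_here)
    return plan


items = [
    ("test_a", {"foo": 1}),
    ("test_b", {"foo": 1}),
    ("test_c", {}),          # does not use foo; foo is carried over
    ("test_d", {"foo": 2}),
]
plan = teardown_plan(items)
# foo (param 1) survives test_a, is finalized after test_b (test_d wants 2),
# and the param-2 instance is finalized after test_d (nothing uses it later).
```

This is exactly the arbitrary lookahead mentioned: deciding test_b's teardown requires skipping past test_c, which doesn't use `foo` at all.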
(force-pushed 25fab5f to 643afdf)
I've reworked … More importantly, this disallows …

The implementations of … Alternatively, we could avoid all of those troubles by dropping support for carrying fixtures along tests that do not depend on them:

```python
@pytest.fixture(scope="session")
def foo(request):
    return request.param


@pytest.mark.parametrize("foo", [1], indirect=True)
def test_a(foo):
    pass


def test_b():
    assert 2 + 2 == 4


@pytest.mark.parametrize("foo", [1], indirect=True)
def test_c(foo):
    pass
```

In this case, pytest auto-reorders parametrized tests based on param values (if reordering is not turned off by the user), but in a more complex scenario, additional teardown-setup work will surface:

```python
@pytest.fixture(scope="session")
def foo():
    pass


@pytest.fixture(scope="session")
def bar():
    pass


@pytest.fixture(scope="session")
def baz():
    pass


@pytest.mark.parametrize(("foo", "bar"), [(1, 1)], indirect=True)
def test_a(foo, bar):
    pass


@pytest.mark.parametrize(("bar", "baz"), [(1, 1)], indirect=True)
def test_b(bar, baz):
    pass


@pytest.mark.parametrize(("baz", "foo"), [(1, 1)], indirect=True)
def test_c(baz, foo):
    pass
```

Currently all those fixtures are carried over; with this change they would not be. And so I come back to the "disgusting" idea of tearing down parametrized fixtures in the test before the parameter changes. So in this test setup, …
Done! I should add the following tests (besides the ones mentioned earlier in one, two):
By the way, this PR also relates to: …
@RonnyPfannschmidt what do you think is the lesser evil in this case?

```python
import pytest


@pytest.fixture(scope="session")
def foo(request):
    return getattr(request, "param", None)


@pytest.mark.parametrize("foo", [1], indirect=True)
def test_a(foo):
    assert foo == 1


def test_b(request):
    hmmm(request)


@pytest.mark.parametrize("foo", [1], indirect=True)
def test_c(foo):
    assert foo == 1
```

* Assume that parametrized-test autoreordering is turned off, or that the user forced a custom order, or that there are multiple parameters and autoreordering had to pick an argname not to be consecutive, and our fixture was picked.

UPD: I will go with option 1 for now, because in most cases the extra setup-teardown won't happen thanks to autoreordering, and the worst-case scenario is a tests slowdown, while option 2 risks breaking some tests in the wild (I'm almost certain that I'll find out at my workplace that the new version of pytest broke some of the millions of pytest tests).
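For context on why autoreordering usually saves us: pytest groups parametrized tests so that tests sharing a higher-scope param value run consecutively. This toy grouping illustrates the effect; it is not pytest's actual reordering algorithm, and the names are invented:

```python
def group_by_param(items):
    """Toy stable grouping: order tests so items sharing a param value
    are consecutive, preserving first-seen order of the groups.
    `items` is a list of (test_name, param-or-None) pairs."""
    groups = {}
    for name, param in items:
        groups.setdefault(param, []).append(name)
    # dicts preserve insertion order, so groups come out first-seen first
    return [name for names in groups.values() for name in names]


# test_b has no param and forms its own group, so test_a and test_c
# (both param=1) end up adjacent and foo only needs one setup/teardown.
order = group_by_param([("test_a", 1), ("test_b", None), ("test_c", 1)])
```

When this grouping is disabled or defeated (the asterisked cases above), the param-1 tests stay separated and the extra setup-teardown cost appears.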
Teardown when the next item doesn't use it is worse than failure. At first glance, the failure is an acceptable first step, as it turns a silent error into a visible one, but the real issue is that `teardown_towards` should tear it down in the teardown of test_b, as test_c has indeed different parameters.
There is no issue there: …

The issue is only in …
But I get the idea: you think failing explicitly is better than the current state of tearing down in the setup phase. I'll go on with this approach.
The goal should be to eventually be able to do it in `teardown_towards`, but basically, if a fixture has to be removed in the setup phase, something went wrong. I do fear that I am missing some edge cases here where the teardown is correct, but my current understanding is that the teardown ought to happen in the teardown phase of the prior test item.
@RonnyPfannschmidt ready for review

Could anyone review this please?
RonnyPfannschmidt left a comment:
I just did a preliminary skim of the change. The finalizer handles are an interesting way to sort the dependencies out; unfortunately, it creates a very global way to sort it. I haven't done my own due diligence yet; I hope there's potentially a sensible way to integrate the detail handling of this with setupstate. After my first iteration, I'm getting the impression it should be integrated into finalizers in a more direct manner. This is quite the tricky problem, in particular with the current architecture. I am of the impression that a missing underlying refactoring/feature addition is making the task you set out to solve very hard. We have to be very careful about the effect of the tech debt incurred by this.
src/_pytest/fixtures.py (outdated):

```diff
-node.ihook.pytest_fixture_post_finalizer(fixturedef=self, request=request)
+try:
+    node.ihook.pytest_fixture_post_finalizer(fixturedef=self, request=request)
+except BaseException as e:
```
This is one of the hooks that should never error. If we hide away BaseException here, we break quite a bit of the UX, so we absolutely need to handle this hook raising an outcome or a BaseException that's not a normal exception.
Changed to only catch `TEST_OUTCOME`, like in `SetupState.teardown_exact`.
src/_pytest/fixtures.py (outdated):

```python
request._fixturemanager._on_parametrized_fixture_finished(self)
self._self_finalizer = None
# Avoid accumulating garbage finalizers in nodes and fixturedefs (#4871).
for handle in self._self_finalizer_handles:
```
The handles are something I need to dig into deeper; that layer is a bit confusing at first glance. Ideally, we only track finalizers in a single way.
I've changed all `addfinalizer` functions to work in a single way. More on handles in the comment below.
src/_pytest/nodes.py (outdated):

```python
__slots__ = ()


FinalizerStorage = dict[_FinalizerId, Callable[[], object]]
```
This looks like a problem: as far as I can tell, it could collect garbage in in-process tests that intentionally break pytest for behaviour checks. It's also a global mutable storage, something we'd absolutely want to avoid.
I think you've read this piece of code incorrectly. This is not a global variable, but a type. I haven't added any completely new state; I've added handles to the existing finalizer storages in nodes and fixturedefs.
src/_pytest/runner.py (outdated):

```python
    fin()
except TEST_OUTCOME as e:
    these_exceptions.append(e)
if isinstance(nextitem, Item):
```
Suggested change:

```diff
-if isinstance(nextitem, Item):
+if nextitem is not None:
```
Fixed
@RonnyPfannschmidt fixed + replied. Why removal handles are needed:
In #4871, they also suggest that ideally we should avoid fixture finalizers and instead plan fixture setup-teardown at the collection stage. This is impossible because, as explained here, custom implementations of … An alternative implementation without finalizers would be the following:
Still, I can't see a non-ad-hoc way of avoiding the pile-up of finalizers of parametrized fixtures in nodes. Removal handles, as currently in the PR, are the most straightforward solution that I see. A solution for nodes without handles would be to remember the node for which we have a finalizer currently registered and not re-add it on fixture recomputation. The drawback would be incorrect finalizer ordering:

```python
@pytest.fixture(scope="module")
def foo():
    ...


@pytest.fixture(scope="module")
def bar():
    ...


@pytest.mark.parametrize("foo", [1], indirect=True)
def test1(foo, bar):
    ...


@pytest.mark.parametrize("foo", [2], indirect=True)
def test2(bar, foo):
    ...
```

The order of fixture teardown on … But the instance of … AFAICS the only way the correct order can be achieved is by removing the finalizers of …
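A toy model of the removal-handle idea (invented names, not the PR's actual code): `addfinalizer` returns a handle, and a recomputed fixture drops its stale finalizer and re-registers, which restores the correct LIFO position.

```python
class FinalizerHandle:
    """Allows removing a registered finalizer before it runs."""

    def __init__(self, registry, key):
        self._registry = registry
        self._key = key

    def remove(self):
        self._registry._finalizers.pop(self._key, None)


class FinalizerRegistry:
    """Runs still-registered finalizers in LIFO registration order.
    addfinalizer returns a handle so a recomputed fixture can drop its
    old finalizer and re-register at the most recent position."""

    def __init__(self):
        self._finalizers = {}
        self._next_key = 0

    def addfinalizer(self, fin):
        key = self._next_key
        self._next_key += 1
        self._finalizers[key] = fin
        return FinalizerHandle(self, key)

    def teardown(self):
        # Highest key registered last, so run it first (LIFO).
        for key in sorted(self._finalizers, reverse=True):
            self._finalizers.pop(key)()


log = []
reg = FinalizerRegistry()
h_foo = reg.addfinalizer(lambda: log.append("foo[1]"))
reg.addfinalizer(lambda: log.append("bar"))
# foo is recomputed with a new param: drop the stale finalizer and
# re-register, so foo now tears down *before* bar.
h_foo.remove()
reg.addfinalizer(lambda: log.append("foo[2]"))
reg.teardown()
```

Without `remove()`, either the stale `foo[1]` finalizer would pile up, or keeping the old registration slot would tear foo down after bar, in the wrong order.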
OK, I need to reiterate the handles idea a bit more for my own understanding; I think the fact that it's currently a global variable is part of my confusion. Ideally, the complete details of caching/recreation and the current fixture closure turn into the responsibility of the setup state, but that is so out of scope for this change. I do believe having the handles as one variable may create possibly negligible issues with in-process pytest runs for testing pytest/pytest plugins. Offhand, it looks to me as if we could leave handles in place by accident if the pytest under test gets really broken. I suspect this could be resolved by having a weakref.WeakKeyDictionary, which would automatically clean up.
Alternatively, there is the following compromise: …

What do you think? Edit: After thinking for a while, I still believe handles are a cleaner solution, because this approach …
I don't think weakref can be used with handles. If we drop a handle, that does not mean we no longer want the finalizer to run, unless we want to break the current finalizers API for users. What dropping a handle currently means in the PR is that we no longer want the ability to remove the finalizer. Edit: actually, a handle could keep a weakref to … If handles are kept past their intended scope by mistake, then finalizers are eventually invoked and cleared, and handles are left owning a bunch of …
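The weakref variant under discussion might look like this sketch (hypothetical `Registry`, not pytest code): the handle holds only a weak reference to the registered finalizer, so a leaked handle cannot keep torn-down state alive.

```python
import weakref


class Registry:
    """Toy finalizer registry whose addfinalizer returns a *weak*
    handle: the handle does not keep the finalizer (or anything it
    closes over) alive after the registry has run and dropped it."""

    def __init__(self):
        self._finalizers = []

    def addfinalizer(self, fin):
        self._finalizers.append(fin)
        # Function objects support weak references.
        return weakref.ref(fin)

    def teardown(self):
        fins, self._finalizers = self._finalizers, []
        for fin in reversed(fins):
            fin()


log = []
reg = Registry()
handle = reg.addfinalizer(lambda: log.append("teardown"))
reg.teardown()
# On CPython the lambda is now unreferenced, so handle() returns None:
# a leaked handle cannot resurrect or retain torn-down state.
```

The trade-off mirrored here is the one raised above: a dead weak handle can no longer be used to remove the finalizer, so weakness only helps for the "leaked handle" cleanup case, not for the removal API itself.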
@RonnyPfannschmidt I've moved fixture finalizers management to `SetupState`.
Thanks for moving it into setupstate. That removes my biggest concern. I did another skim, and now it looks solid at first glance to me. As this is necessarily something big, I'll have to allocate some time for a deeper read. Thank you.
- make `addfinalizer` on the `FixtureRequest` object return a handle that allows removing the finalizer; use it in `FixtureDef`
- in `SetupState`, tear down parametrized fixtures whose parameter changes in `nextitem`, at the teardown stage

Closes: #2043
Closes: #4871
Closes: #5848
Closes: #9287
Closes: #14103
Closes: #14114