
Commit 16d681d

Fix rolling updates flaky tests (#105)
Recently we added a new feature allowing cluster rolling updates, so users can request 1.25->1.29 cluster upgrades. The tests for this feature consisted of creating a 1.27 cluster, patching the version to 1.29, and then verifying that the cluster goes from 1.27 to 1.28 and then to 1.29. However, we realized that sometimes when checking that the cluster is at 1.28, we find that it is actually already at 1.29, meaning the waiter didn't get the chance to observe the cluster in an active state at 1.28. This patch reduces the waiter delay so the test catches the cluster in an active state more quickly.

Signed-off-by: Amine Hilaly <hilalyamine@gmail.com>

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
1 parent 59a0ab3 commit 16d681d

File tree

1 file changed (+5 −1)


test/e2e/tests/test_cluster.py

Lines changed: 5 additions & 1 deletion

@@ -41,7 +41,11 @@
 CHECK_STATUS_WAIT_SECONDS = 30

 def wait_for_cluster_active(eks_client, cluster_name):
-    waiter = eks_client.get_waiter('cluster_active')
+    waiter = eks_client.get_waiter(
+        'cluster_active',
+    )
+    waiter.config.delay = 5
+    waiter.config.max_attempts = 240
     waiter.wait(name=cluster_name)

 def get_and_assert_status(ref: k8s.CustomResourceReference, expected_status: str, expected_synced: bool):
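For context, a minimal sketch of the polling semantics this patch tunes: a botocore-style waiter sleeps `delay` seconds between checks, so a long delay can skip right past a short-lived intermediate state (here, the 1.28 `ACTIVE` window). The `wait_for_state` helper and its parameters below are illustrative stand-ins, not boto3's actual API; total wait time is roughly `delay * max_attempts`, which is why the patch lowers `delay` to 5 and raises `max_attempts` to 240 to keep the overall timeout similar.

```python
import time


def wait_for_state(get_state, target, delay=5, max_attempts=240):
    """Poll get_state() until it returns `target`.

    Sleeps `delay` seconds between attempts, mirroring botocore waiter
    semantics: a smaller delay polls more often and is likelier to
    observe a transient state before the resource moves on.
    """
    for _ in range(max_attempts):
        if get_state() == target:
            return True
        time.sleep(delay)
    raise TimeoutError(f"state never reached {target!r}")


# Hypothetical usage: a cluster that reports UPDATING twice, then ACTIVE.
states = iter(["UPDATING", "UPDATING", "ACTIVE"])
wait_for_state(lambda: next(states), "ACTIVE", delay=0, max_attempts=10)
```

With a large `delay`, the single check landing inside the brief 1.28 `ACTIVE` window becomes unlikely; the test then sees the cluster only after it has already moved to 1.29, which is the flakiness described above.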
