
Conversation

@avinash2692
Contributor

This PR aims to fix #286, with some additional input from @jakelorocco. For now, it:

  • adds additional garbage collection in conftest.py
  • skips certain test modules that have many tests marked as qualitative

@mergify

mergify bot commented Jan 7, 2026

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 Enforce conventional commit

Wonderful, this rule succeeded.

Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/

  • title ~= ^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert|release)(?:\(.+\))?:

@avinash2692 avinash2692 marked this pull request as ready for review January 9, 2026 17:43
hf_model_name="ibm-granite/granite-4.0-micro",
ollama_name="ibm/granite4:micro",
ollama_name="granite4:micro",
openai_name="granite4:micro", # setting this just for testing purposes.
Contributor

Since this is user facing, I don't think we should set this for testing purposes. We should just explicitly refer to .ollama_name when instantiating the backend/session.

test/conftest.py Outdated
Comment on lines 26 to 70
@pytest.fixture(autouse=True, scope="function")
def aggressive_cleanup():
    """Aggressive memory cleanup after each test to prevent OOM on CI runners."""
    yield
    # Only run aggressive cleanup in CI where memory is constrained
    if int(os.environ.get("CICD", 0)) != 1:
        return

    # Cleanup after each test
    gc.collect()
    gc.collect()

    # If torch is available, clear CUDA cache
    try:
        import torch

        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            torch.cuda.synchronize()
    except ImportError:
        pass


@pytest.fixture(autouse=True, scope="module")
def cleanup_module_fixtures():
    """Cleanup module-scoped fixtures to free memory between test modules."""
    yield
    # Only run aggressive cleanup in CI where memory is constrained
    if int(os.environ.get("CICD", 0)) != 1:
        return

    # Cleanup after module
    gc.collect()
    gc.collect()
    gc.collect()

    # If torch is available, clear CUDA cache
    try:
        import torch

        if torch.cuda.is_available():
            torch.cuda.empty_cache()
            torch.cuda.synchronize()
    except ImportError:
        pass
Contributor

Can we extract this logic into a single function and then just create two pytest fixtures that call it?

Contributor Author
done



Development

Successfully merging this pull request may close these issues.

fix: overhaul tests

3 participants