-**Description:** {unittest-documentation}`unittest<>` is Python's built-in testing framework, inspired by JUnit. Tests are written in classes inheriting from `unittest.TestCase`. {coverage.py}`coveragepy-documentation<>` is the standard standalone tool for measuring code coverage.
-**Evaluation:**
-**Ease of Use:** Moderate. Requires significant boilerplate (class definitions, inheritance, specific method names, explicit `setUp`/`tearDown` methods). Writing simple tests is more verbose than alternatives (see the sketch after this list).
-**Feature-Rich:** Moderate. Provides core testing features but lacks the advanced features and extensive plugin ecosystem of {pytest}`pytest<>` (e.g., simple functional fixtures, powerful parametrization decorators built-in).
-**Performance:** Moderate. Test execution can be slower than {pytest}`pytest<>` for large test suites due to its architecture (creating a class instance per test method). {coverage.py}`coveragepy-documentation<>` performance is generally good.
-**OS Interoperability:** Excellent. Both are foundational Python tools and highly robust across OSs. {unittest-documentation}`unittest<>` is part of the standard library; {coverage.py}`coveragepy-documentation<>` runs on all major platforms.
-**Integration:** High (Individual). Both have CLIs easily called from Task Automation/CI. Integrating them _together_ requires explicitly wrapping `unittest` execution with `coverage run -m unittest` or using less standardized plugins compared to the {pytest-pytest-cov}`pytest<>` ecosystem. Generating standard reports like JUnit XML also often requires extra steps or third-party runners for {unittest-documentation}`unittest<>`.
-**Maturity & Stability:** Very High. Both are extremely mature, stable, battle-tested.
-**Community & Documentation:** Very High. Widely adopted, vast documentation.
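
For illustration, a minimal sketch of the class-based style and the external coverage wrapper. The `slugify` function and module name are hypothetical stand-ins, not part of the template:

```python
# test_slugify.py -- hypothetical example of unittest's class-based boilerplate:
# a TestCase subclass, setUp, and assert* methods.
import unittest


def slugify(text: str) -> str:
    """Toy function under test (stand-in for real project code)."""
    return "-".join(text.lower().split())


class SlugifyTests(unittest.TestCase):
    def setUp(self):
        # Runs before every test method.
        self.sample = "Hello World"

    def test_lowercases_and_joins(self):
        self.assertEqual(slugify(self.sample), "hello-world")

    def test_handles_extra_whitespace(self):
        self.assertEqual(slugify("  many   spaces "), "many-spaces")


if __name__ == "__main__":
    unittest.main()

# Coverage requires wrapping the test run with coverage.py explicitly, e.g.:
#   coverage run -m unittest discover
#   coverage report -m
```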
@@ -53,13 +53,13 @@ We evaluated the primary testing framework and coverage tools:
-**OS Interoperability:** Excellent. Pure Python package, works reliably across OSs.
-**Integration:** Excellent. Widely supported, integrates seamlessly into editors/IDEs, {pre-commit}`pre-commit<>`, Task Automation, CI/CD. Designed for external execution via CLI.
-**Reporting:** Excellent. Provides clear terminal output. Standard support for generating JUnit XML reports (`--junitxml=...`), which is essential for CI platform integration (see the sketch below).
-**Coverage Reporting:** Poor (Not built-in). Requires an external tool (like {coverage.py}`coveragepy-documentation<>`) and an integration mechanism.
-**Maturity & Stability:** Very High. Mature, stable, widely adopted standard for modern Python testing.
-**Community & Documentation:** Very High. Massive, active community, extensive documentation.
-**Conclusion:** The de facto standard for modern Python testing. Excels at ease of use, features, performance, and integration for the testing framework itself. Lacks built-in coverage, requiring integration with another tool.
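
For comparison, the same hypothetical checks rewritten in pytest's functional style (plain functions, plain asserts, built-in parametrization), with the `--junitxml` flag noted for CI:

```python
# test_slugify.py -- the same hypothetical checks as the unittest sketch above,
# rewritten as plain functions with pytest's built-in parametrization.
import pytest


def slugify(text: str) -> str:
    """Toy function under test (stand-in for real project code)."""
    return "-".join(text.lower().split())


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  many   spaces ", "many-spaces"),
    ],
)
def test_slugify(raw, expected):
    # No TestCase subclass, setUp, or assert* methods needed.
    assert slugify(raw) == expected

# In CI, the same suite can emit a JUnit XML report for the platform to collect:
#   pytest --junitxml=reports/junit.xml
```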
### Option 3: coverage.py

-**Description:** The standard standalone tool for measuring code coverage in Python. Monitors code execution and reports on lines/branches executed.
-**Evaluation:** (Evaluated primarily as an engine; its integration is the key concern.)
@@ -76,11 +76,11 @@ We evaluated the primary testing framework and coverage tools:
### Option 4: {pytest-pytest-cov}`pytest-cov<>`
-**Description:** The official {pytest}`pytest<>` plugin that integrates {coverage.py}`coveragepy-documentation<>` seamlessly into the {pytest}`pytest<>` workflow.
-**Evaluation:** (Evaluated as the integration bridge).
-**Integration with Testing & Coverage:** Excellent. Provides seamless, standard integration by adding `--cov` flags to the `pytest` command (see the sketch after this list). Orchestrates running {coverage.py}`coveragepy-documentation<>` around the {pytest-pytest-cov}`pytest<>` run.
-**Accurate & Detailed Reporting:** Excellent. Leverages {coverage.py}`coveragepy-documentation<>`'s full reporting capabilities via {pytest-pytest-cov}`pytest<>` command-line arguments and config files.
-**Performance:** High (Combined). Adds minimal overhead; combined performance is driven by {pytest-pytest-cov}`pytest<>` and {coverage.py}`coveragepy-coverage-documentation<>` execution.
-**OS Interoperability:** Excellent. Pure Python plugin, inherits compatibility from {pytest-pytest-cov}`pytest<>` and {coverage.py}`coveragepy-coverage-documentation<>`.
-**Callable for Workflow:** Excellent. Simply adds flags to the standard `pytest` command, easily used in Task Automation and CI.
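
A minimal sketch of how those flags slot into a Task Automation session. The package name and dependency installation steps are assumptions for illustration, not template requirements:

```python
# noxfile.py -- illustrative session that runs pytest with pytest-cov's --cov flags.
# The package name "my_package" is a hypothetical placeholder.
import nox


@nox.session
def tests(session):
    session.install("pytest", "pytest-cov")
    session.install("-e", ".")  # install the project being tested
    session.run(
        "pytest",
        "--cov=my_package",           # measure coverage for the package
        "--cov-report=term-missing",  # terminal summary listing uncovered lines
        "--cov-report=xml",           # machine-readable report for CI and other tools
    )
```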
@@ -104,7 +104,7 @@ We evaluated the primary testing framework and coverage tools:
- Matrix Orchestration (for full matrix): **{nox}`Nox<>`** (invoking {pytest-pytest-cov}`pytest<>` across matrix) or optionally **{tox}`Tox<>`** (invoked by {nox}`Nox<>` for specific needs).
docs/topics/13_ci-orchestration.md
@@ -46,7 +46,7 @@ We focus on the approach of using standard CI platforms to orchestrate the templ
-**Checkout:** Standard step to get code.
-**Environment Setup:** Uses platform actions/steps to set up required Python versions (e.g., `actions/setup-python` on GitHub Actions) and cache dependencies efficiently, which is better handled by the platform than trying to manage these complex caching strategies manually in a simple Task Automation script.
-**Matrix Testing:** Combines platform matrix capabilities (OS + Python versions) with Nox's ability to run across multiple Python versions (using the Nox `python=` parameter) or specifically configured sessions to cover testing requirements reliably across combinations.
-**Reporting:** Leverages the platform's ability to collect standard reports (JUnit XML from {pytest}`pytest<>`, Cobertura XML from {coverage.py}`coveragepy-documentation<>`) generated by the Task Automation layer (see the sketch after this list).
-**Status Checks:** The platform provides visual feedback (pass/fail) linked to commits/PRs based on job outcomes.
-**Adaptability:** Excellent. Switching CI platforms involves mapping the checkout, Python setup, caching, secrets, and artifact steps from the old platform to the new one, and then configuring the new platform to call the _same_ `nox -s <task>` commands as before. The core Task Automation logic (`noxfile.py`) remains unchanged.
-**Value of Examples:** Very High. Concrete examples for popular platforms (like {github-actions}`GitHub Actions<>`) significantly speed up adoption of this strategy: a ready-to-use template configuration demonstrates exactly how to integrate Task Automation calls, setup, matrixing, and reporting.
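
The platform workflow itself (for example, a GitHub Actions job) is not reproduced here; the sketch below shows only the Task Automation side such a job would invoke with `nox -s tests` and whose report files it would collect. The Python versions, package name, and report file names are assumptions for illustration:

```python
# noxfile.py -- illustrative session a CI job would call via `nox -s tests`.
# The CI platform supplies checkout, Python setup, caching, and artifact upload;
# this session produces the standard report files those steps collect.
import nox


@nox.session(python=["3.10", "3.11", "3.12"])  # matrix covered by Nox and/or the platform
def tests(session):
    session.install("pytest", "pytest-cov")
    session.install("-e", ".")  # install the project being tested
    session.run(
        "pytest",
        "--junitxml=junit.xml",           # JUnit XML for the platform's test summary
        "--cov=my_package",               # hypothetical package name
        "--cov-report=xml:coverage.xml",  # Cobertura-style XML for coverage reporting
    )
```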