61 changes: 61 additions & 0 deletions BRANCH_COVERAGE_GUIDE.md
@@ -0,0 +1,61 @@
## Add Branch Coverage

### Step 1: Register in `mypy/branch_coverage.py`

```python
BRANCH_COVERAGE = {
    'check_return_stmt': set(),
    'your_function_name': set(),  # Add your function
}

BRANCH_DESCRIPTIONS = {
    'your_function_name': {
        1: 'Function entry',
        2: 'if condition_x - TRUE',
        3: 'if condition_x - FALSE',
        4: 'elif condition_y - TRUE',
        5: 'else branch',
    }
}
```

### Step 2: Instrument Your Function

```python
def your_function_name(self, param):
    from mypy.branch_coverage import record_branch
    record_branch('your_function_name', 1)  # Function entry

    if condition_x:
        record_branch('your_function_name', 2)  # TRUE
        # code...
    elif condition_y:
        record_branch('your_function_name', 3)  # FALSE from if
        record_branch('your_function_name', 4)  # TRUE for elif
        # code...
    else:
        record_branch('your_function_name', 3)  # FALSE from if
        record_branch('your_function_name', 5)  # else
        # code...
```

**Important:** Import `record_branch` inside the function to avoid circular imports.
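For instance, applied to the already registered `check_return_stmt`, the first few instrumentation points might look like the sketch below. This is only a hypothetical illustration: the checker's real logic is elided and `defn` stands in for whatever the surrounding code computes; the branch numbers follow `BRANCH_DESCRIPTIONS` in `mypy/branch_coverage.py`.

```python
# Hypothetical sketch of instrumenting check_return_stmt in mypy/checker.py.
# The real type-checking logic is elided; only the recording calls are shown.
def check_return_stmt(self, s):
    from mypy.branch_coverage import record_branch  # local import avoids circular imports
    record_branch('check_return_stmt', 1)  # Branch 1: function entry

    defn = ...  # enclosing function definition, obtained from the checker (elided)
    if defn is not None:
        record_branch('check_return_stmt', 2)  # Branch 2: defn is not None - TRUE
        # original checking logic...
    else:
        record_branch('check_return_stmt', 3)  # Branch 3: defn is not None - FALSE
```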

## Run Tests

**CRITICAL**: You must pass `-n0` to disable parallel execution (pytest-xdist). Coverage is tracked in an in-memory dictionary per process, so a parallel run will not produce a complete report.

```bash
# Activate virtual environment first
source venv/bin/activate

# Run all tests
pytest mypy/test/testcheck.py -n0

# Run a specific test case
pytest mypy/test/testcheck.py::TypeCheckSuite::check-basic.test::testInvalidReturn -n0
```
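Because coverage lives in the in-process `BRANCH_COVERAGE` dictionary (which is also why parallel workers cannot contribute data), you can sanity-check the tracking module itself without running the test suite, e.g. with a small script like this sketch:

```python
# Quick sanity check of the tracking module (no pytest involved).
from mypy.branch_coverage import BRANCH_COVERAGE, record_branch, get_coverage_report

record_branch('check_return_stmt', 1)  # simulate hitting the entry branch
record_branch('not_registered', 1)     # silently ignored: not in BRANCH_COVERAGE

print(BRANCH_COVERAGE['check_return_stmt'])  # {1}
print(get_coverage_report())                 # full per-branch report, mostly NOT COVERED
```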

## View Reports

The report is automatically saved to `branch_coverage_report.txt` in the project root directory.
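The file follows the format produced by `get_coverage_report()`. A truncated excerpt with made-up numbers, just to show the layout:

```text
================================================================================
BRANCH COVERAGE REPORT
================================================================================

================================================================================
Function: check_return_stmt
================================================================================
Coverage: 2/33 branches (6.1%)

  Branch  1: COVERED         | Function entry
  Branch  2: COVERED         | defn is not None - TRUE
  Branch  3: NOT COVERED     | defn is not None - FALSE
  ...
```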
101 changes: 101 additions & 0 deletions mypy/branch_coverage.py
@@ -0,0 +1,101 @@
"""Branch Coverage Tracking Module"""

from pathlib import Path

BRANCH_COVERAGE = {
    'check_return_stmt': set()
}

BRANCH_DESCRIPTIONS = {
    'check_return_stmt': {
        1: 'Function entry',
        2: 'defn is not None - TRUE',
        3: 'defn is not None - FALSE',
        4: 'defn.is_generator - TRUE',
        5: 'defn.is_generator - FALSE (else/elif)',
        6: 'defn.is_coroutine - TRUE',
        7: 'defn.is_coroutine - FALSE (else)',
        8: 'isinstance(return_type, UninhabitedType) - TRUE',
        9: 'isinstance(return_type, UninhabitedType) - FALSE',
        10: 'not is_lambda and not return_type.ambiguous - TRUE',
        11: 'not is_lambda and not return_type.ambiguous - FALSE',
        12: 's.expr - TRUE (has return value)',
        13: 's.expr - FALSE (empty return)',
        14: 'isinstance(s.expr, (CallExpr, ...)) or isinstance(s.expr, AwaitExpr) - TRUE',
        15: 'isinstance(s.expr, (CallExpr, ...)) or isinstance(s.expr, AwaitExpr) - FALSE',
        16: 'isinstance(typ, Instance) and typ.type.fullname in NOT_IMPLEMENTED - TRUE',
        17: 'isinstance(typ, Instance) and typ.type.fullname in NOT_IMPLEMENTED - FALSE',
        18: 'defn.is_async_generator - TRUE',
        19: 'defn.is_async_generator - FALSE',
        20: 'isinstance(typ, AnyType) - TRUE',
        21: 'isinstance(typ, AnyType) - FALSE',
        22: 'warn_return_any conditions - TRUE (all conditions met)',
        23: 'warn_return_any conditions - FALSE (at least one condition not met)',
        24: 'declared_none_return - TRUE',
        25: 'declared_none_return - FALSE',
        26: 'is_lambda or isinstance(typ, NoneType) - TRUE',
        27: 'is_lambda or isinstance(typ, NoneType) - FALSE',
        28: 'defn.is_generator and not defn.is_coroutine and isinstance(return_type, AnyType) - TRUE',
        29: 'defn.is_generator and not defn.is_coroutine and isinstance(return_type, AnyType) - FALSE',
        30: 'isinstance(return_type, (NoneType, AnyType)) - TRUE',
        31: 'isinstance(return_type, (NoneType, AnyType)) - FALSE',
        32: 'self.in_checked_function() - TRUE',
        33: 'self.in_checked_function() - FALSE',
    }
}


def record_branch(function_name, branch_id):
    """Record that a branch of an instrumented function was executed."""
    if function_name in BRANCH_COVERAGE:
        BRANCH_COVERAGE[function_name].add(branch_id)


def get_coverage_report():
    """Build a human-readable branch coverage report as a string."""
    report = []
    report.append("=" * 80)
    report.append("BRANCH COVERAGE REPORT")
    report.append("=" * 80)

    for func_name, covered_branches in BRANCH_COVERAGE.items():
        report.append(f"\n{'=' * 80}")
        report.append(f"Function: {func_name}")
        report.append(f"{'=' * 80}")

        descriptions = BRANCH_DESCRIPTIONS.get(func_name, {})
        total_branches = len(descriptions)
        covered_count = len(covered_branches)

        # Guard against division by zero for functions with no registered branches.
        percent = covered_count / total_branches * 100 if total_branches else 0.0
        report.append(f"Coverage: {covered_count}/{total_branches} branches ({percent:.1f}%)")
        report.append("")

        for branch_id in sorted(descriptions.keys()):
            status = "COVERED" if branch_id in covered_branches else "NOT COVERED"
            desc = descriptions[branch_id]
            report.append(f"  Branch {branch_id:2d}: {status:15s} | {desc}")

        uncovered = set(descriptions.keys()) - covered_branches
        if uncovered:
            report.append("\n" + "=" * 80)
            report.append("UNCOVERED BRANCHES:")
            report.append("=" * 80)
            for branch_id in sorted(uncovered):
                report.append(f"  Branch {branch_id:2d}: {descriptions[branch_id]}")

    report.append("\n" + "=" * 80)
    return "\n".join(report)


def save_coverage_report(filename="branch_coverage_report.txt"):
    """Write the coverage report to `filename` in the current working directory."""
    report = get_coverage_report()
    output_path = Path.cwd() / filename
    with open(output_path, 'w', encoding='utf-8') as f:
        f.write(report)
    print(f"\nCoverage report saved to: {output_path}")


42 changes: 42 additions & 0 deletions mypy/test/conftest.py
@@ -0,0 +1,42 @@
"""
Pytest configuration for branch coverage collection
"""

import pytest


def pytest_sessionfinish(session, exitstatus):
    """
    Hook that runs after all tests complete
    """
    try:
        from mypy.branch_coverage import (
            save_coverage_report,
            get_coverage_report,
            BRANCH_COVERAGE,
        )

        total_covered = sum(len(branches) for branches in BRANCH_COVERAGE.values())

        if total_covered > 0:
            print("\n" + "=" * 80)
            print("BRANCH COVERAGE COLLECTION COMPLETED")
            print("=" * 80)
            print(f"Total branches covered: {total_covered}")

            save_coverage_report()

            print("\n" + get_coverage_report())

            print("\n" + "=" * 80)
            print("Coverage reports saved!")
            print("=" * 80)
        else:
            print("\nWarning: No branch coverage data collected")

    except ImportError:
        print("\nBranch coverage module not found - skipping coverage report")
    except Exception as e:
        print(f"\nError saving coverage report: {e}")
        import traceback
        traceback.print_exc()
103 changes: 103 additions & 0 deletions report.md
@@ -0,0 +1,103 @@
# Report for assignment 3

This is a template for your report. You are free to modify it as needed.
It is not required to use markdown for your report either, but the report
has to be delivered in a standard, cross-platform format.

## Project

Name:

URL:

One or two sentences describing it

## Onboarding experience

Did it build and run as documented?

See the assignment for details; if everything works out of the box,
there is no need to write much here. If the first project(s) you picked
ended up being unsuitable, you can describe the "onboarding experience"
for each project, along with reason(s) why you changed to a different one.


## Complexity

1. What are your results for five complex functions?
* Did all methods (tools vs. manual count) get the same result?
* Are the results clear?
2. Are the functions just complex, or also long?
3. What is the purpose of the functions?
4. Are exceptions taken into account in the given measurements?
5. Is the documentation clear w.r.t. all the possible outcomes?

## Refactoring

Plan for refactoring complex code:

Estimated impact of refactoring (lower CC, but other drawbacks?).

Carried out refactoring (optional, P+):

git diff ...

## Coverage

### Tools

Document your experience in using a "new"/different coverage tool.

How well was the tool documented? Was it possible/easy/difficult to
integrate it with your build environment?

### Your own coverage tool

Show a patch (or link to a branch) that shows the instrumented code to
gather coverage measurements.

The patch is probably too long to be copied here, so please add
the git command that is used to obtain the patch instead:

git diff ...

What kinds of constructs does your tool support, and how accurate is
its output?

### Evaluation

1. How detailed is your coverage measurement?

2. What are the limitations of your own tool?

3. Are the results of your tool consistent with existing coverage tools?

## Coverage improvement

Show the comments that describe the requirements for the coverage.

Report of old coverage: [link]

Report of new coverage: [link]

Test cases added:

git diff ...

Number of test cases added: two per team member (P) or at least four (P+).

## Self-assessment: Way of working

Current state according to the Essence standard: ...

Was the self-assessment unanimous? Any doubts about certain items?

How have you improved so far?

Where is potential for improvement?

## Overall experience

What are your main take-aways from this project? What did you learn?

Is there something special you want to mention here?