
Add data_freshness_sla and volume_threshold tests #932

Open
joostboon wants to merge 3 commits into master from feature/volume-threshold-test

Conversation


@joostboon (Contributor) commented Feb 18, 2026

Summary

  • data_freshness_sla: New test that verifies data was updated before a configurable SLA deadline time, with timezone support and day-of-week/month scheduling.
  • volume_threshold: New test that monitors row count changes with configurable warn/error percentage thresholds. Uses Elementary's metric caching infrastructure (data_monitoring_metrics) to avoid recalculating row counts for buckets that have already been computed.

Code Changes

  • macros/edr/tests/test_data_freshness_sla.sql — New SLA-based freshness test with timezone-aware deadline checking
  • macros/edr/tests/test_volume_threshold.sql — New volume monitoring test with multi-severity thresholds (warn at 5%, error at 10% by default), leveraging incremental metric collection via get_metric_buckets_min_and_max and store_metrics_table_in_cache; a sketch of the severity logic follows below
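
For orientation, here is a minimal sketch of the severity logic this test applies. The CTE and column names (bucketed_row_counts, current_row_count, previous_row_count) are illustrative assumptions rather than the macro's actual identifiers, and the thresholds shown are the documented defaults:

    with changes as (
        select
            bucket_end,
            current_row_count,
            previous_row_count,
            -- nullif avoids division by zero when there is no baseline bucket
            round(100.0 * (current_row_count - previous_row_count)
                  / nullif(previous_row_count, 0), 2) as percent_change
        from bucketed_row_counts
    )

    select
        bucket_end,
        percent_change,
        case
            -- direction='spike' would check only increases, 'drop' only
            -- decreases; 'both' (shown here) uses the absolute change
            when percent_change is null then 0
            when abs(percent_change) >= 10 then 2  -- error (default 10%)
            when abs(percent_change) >= 5 then 1   -- warn (default 5%)
            else 0
        end as severity_level
    from changes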

Breaking Downstream Impacts

None — both files are net-new additions with no changes to existing models or columns.

Merge Status

Safe to merge. No breaking changes.

Test plan

  • Verify data_freshness_sla compiles correctly against a model with a timestamp column
  • Verify volume_threshold compiles and stores metrics to data_monitoring_metrics
  • Confirm incremental behavior: second run should skip already-cached buckets
  • Test warn vs error severity levels with different threshold configurations
  • Test direction parameter: both, spike, drop

Made with Cursor

Summary by CodeRabbit

New Features

  • Added data freshness SLA test to validate model data updates meet daily deadlines with configurable timezone and scheduling options.
  • Added volume threshold test to monitor row count changes and alert on significant variations with customizable thresholds and comparison modes.

@github-actions bot commented:

👋 @joostboon
Thank you for raising your pull request.
Please make sure to add tests and document all user-facing changes.
You can do this by editing the docs files in the elementary repository.


coderabbitai bot commented Feb 18, 2026

📝 Walkthrough

Two new EDR test macros are introduced: data_freshness_sla validates that model data updates meet configurable SLA deadlines with timezone support and scheduling flexibility, while volume_threshold monitors row count changes across time buckets against warn/error thresholds using Elementary's metric caching.

Changes

  • Data Freshness SLA Testing (macros/edr/tests/test_data_freshness_sla.sql): Introduces the data_freshness_sla test macro to validate model data freshness against UTC-based SLA deadlines. Includes parameter validation, timezone normalization, scheduling filters (day_of_week, day_of_month), and a helper macro get_data_freshness_sla_query that compares the max timestamp against the deadline and generates descriptive failure messages with status categorization (NO_DATA, DATA_STALE, DATA_FRESH); a sketch of this status logic follows below.
  • Volume Threshold Testing (macros/edr/tests/test_volume_threshold.sql): Introduces the volume_threshold test macro to track row count changes across configurable time buckets. Aggregates metrics from Elementary's data_monitoring_metrics and test_metrics tables, applies deduplication logic, and delegates to a helper macro get_volume_threshold_comparison_query that computes percent changes, applies severity thresholds (warn/error), and supports directional analysis (spike, drop, both).
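
As referenced above, a minimal sketch of the freshness status categorization. It borrows the names discussed later in this review (mdt.max_timestamp, sd.target_date_start_utc) but is an assumption about shape, not the macro's code; it reflects the final behavior where future-dated data counts as fresh:

    select
        case
            when mdt.max_timestamp is null then 'NO_DATA'
            -- on or after the window start counts as fresh; no upper bound,
            -- so future-dated (pre-loaded) data also passes
            when mdt.max_timestamp >= sd.target_date_start_utc then 'DATA_FRESH'
            else 'DATA_STALE'
        end as freshness_status
    from max_data_timestamp as mdt
    cross join sla_deadline as sd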

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Poem

🐰 Hop along, dear metrics bright,
Freshness checked from morn to night,
Volumes tallied, thresholds met,
EDR's finest safety net!
Data flows through warren's test,
Quality assured at its best!

🚥 Pre-merge checks: ✅ 3 passed
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The pull request title clearly and concisely summarizes the main change, adding two new Elementary tests (data_freshness_sla and volume_threshold), which aligns directly with the file additions and objectives.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; check skipped.


joostboon force-pushed the feature/volume-threshold-test branch from c133ff1 to c31987f on February 18, 2026 at 15:19
coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
macros/edr/tests/test_volume_threshold.sql (2)

228-234: result_description will be NULL when percent_change is NULL.

When previous_row_count is 0 or NULL, percent_change is NULL (line 193). The || concatenation on lines 232–234 will then produce a NULL result_description in most SQL engines (Snowflake, Postgres, Redshift). This row still gets emitted if severity_level > 0, but that can't happen here because line 206 returns 0 when percent_change is null. So this is safe in practice — just noting the coupling: if the severity logic ever changes to flag null percent_change, the description silently disappears.

♻️ Optional: add a COALESCE for defensive description
-        'Row count changed by ' || cast(percent_change as {{ elementary.edr_type_string() }}) || 
-        '% (from ' || cast({{ elementary.edr_cast_as_int('previous_row_count') }} as {{ elementary.edr_type_string() }}) || 
-        ' to ' || cast({{ elementary.edr_cast_as_int('current_row_count') }} as {{ elementary.edr_type_string() }}) || ')' as result_description
+        case 
+            when percent_change is null then 'Row count: ' || cast({{ elementary.edr_cast_as_int('current_row_count') }} as {{ elementary.edr_type_string() }}) || ' (no prior baseline)'
+            else 'Row count changed by ' || cast(percent_change as {{ elementary.edr_type_string() }}) || 
+                 '% (from ' || cast({{ elementary.edr_cast_as_int('previous_row_count') }} as {{ elementary.edr_type_string() }}) || 
+                 ' to ' || cast({{ elementary.edr_cast_as_int('current_row_count') }} as {{ elementary.edr_type_string() }}) || ')'
+        end as result_description
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@macros/edr/tests/test_volume_threshold.sql` around lines 228 - 234, The
concatenation for result_description can become NULL when percent_change is
NULL; update the expression that builds result_description (referencing
result_description, percent_change, previous_row_count, current_row_count and
the helper macros elementary.edr_cast_as_int and elementary.edr_type_string) to
defensively coalesce nulls before concatenation (e.g., COALESCE(percent_change,
'N/A') and COALESCE(cast(...), '0' or 'N/A') or wrap the whole concatenation
with COALESCE(..., '<no change info>')) so the description never yields NULL
even if percent_change or the counts are NULL.

63-77: Hardcoded days_back=14 limits the historical baseline for comparison.

The get_metric_buckets_min_and_max call uses days_back=14, which means only 14 days of history are considered. For weekly or monthly granularity buckets this may be too narrow to have a meaningful previous bucket. Consider either making this configurable or deriving it from time_bucket.
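
One possible shape for that, as a hedged Jinja sketch: derive the lookback window from the bucket period before the call. The mapping values here are illustrative assumptions, not recommendations from the PR:

    {# Hypothetical: derive days_back from the bucket period instead of
       hardcoding 14; the per-period values are illustrative defaults #}
    {% set days_back_by_period = {'hour': 14, 'day': 14, 'week': 60, 'month': 180} %}
    {% set days_back = days_back_by_period.get(time_bucket.period, 14) %}
    {# ...then pass days_back into the get_metric_buckets_min_and_max call
       in place of the literal 14 #}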

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@macros/edr/tests/test_volume_threshold.sql` around lines 63 - 77, The call to
elementary.get_metric_buckets_min_and_max currently hardcodes days_back=14 which
is too narrow for weekly/monthly buckets; update the macro to accept a
configurable days_back parameter (or compute it from the provided time_bucket in
metric_properties) and pass that variable into get_metric_buckets_min_and_max
instead of 14; locate the call site using get_metric_buckets_min_and_max, the
surrounding metric_properties and time_bucket variables, and either add a new
macro argument (e.g., days_back) or derive days_back from
time_bucket.period/count (e.g., multiply count by a suitable multiplier) so
longer bucket granularities use a larger history window.
macros/edr/tests/test_data_freshness_sla.sql (1)

198-207: Confusing dead-code pattern for is_failure logic — use a clearer Jinja branch instead.

The current approach renders SQL WHEN clauses that are logically dead (when not TRUE then false never matches) or always-true (when not FALSE then false always matches). While functionally correct, this is very hard to reason about for future maintainers.

♻️ Proposed simplification
             case
                 when freshness_status = 'DATA_FRESH' then false
-                {# If deadline hasn't passed, don't fail yet #}
-                {% if deadline_passed %}
-                when not TRUE then false
-                {% else %}
-                when not FALSE then false
-                {% endif %}
+                {% if not deadline_passed %}
+                {# Deadline hasn't passed yet - don't fail regardless of status #}
+                when true then false
+                {% endif %}
                 else true
             end as is_failure,

When deadline_passed is false, this renders when true then false which catches everything after the DATA_FRESH check and returns pass. When deadline_passed is true, no extra clause is emitted, so non-fresh statuses fall through to else true (fail).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@macros/edr/tests/test_data_freshness_sla.sql` around lines 198 - 207, The
is_failure CASE for freshness_status uses confusing dead WHEN clauses; update
the Jinja around the CASE in macros/edr/tests/test_data_freshness_sla.sql so
that you emit a clear branch depending on the deadline_passed flag: keep the
initial "when freshness_status = 'DATA_FRESH' then false" check, and if
deadline_passed is false emit an explicit "when true then false" clause (so
non-fresh rows do not fail), but if deadline_passed is true emit no additional
WHEN clause so non-fresh statuses fall through to "else true" (fail). Locate the
CASE building around is_failure to implement this simpler Jinja branching using
the deadline_passed variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@macros/edr/tests/test_data_freshness_sla.sql`:
- Around line 176-184: The freshness_status logic classifies future-dated rows
as DATA_STALE because mdt.max_timestamp_utc > sd.target_date_end_utc falls
through; update the CASE in the macro to handle future timestamps by adding a
condition that treats mdt.max_timestamp_utc > sd.target_date_end_utc as
'DATA_FRESH' (placed before the existing DATA_FRESH when that checks the
window), or alternatively widen the DATA_FRESH check to only require
mdt.max_timestamp_utc >= sd.target_date_start_utc (removing the upper bound);
reference mdt.max_timestamp_utc, sd.target_date_start_utc,
sd.target_date_end_utc and the freshness_status CASE when applying the change.

In `@macros/edr/tests/test_volume_threshold.sql`:
- Around line 153-166: The deduplication using row_number() in
deduplicated_metrics is nondeterministic because it partitions and orders by the
same bucket_end; update the dedup logic to add a deterministic tiebreaker so
freshly computed metrics win: add a precedence column to the source rows (e.g.,
0 for fresh, 1 for cached) or include created_at (or both) and change
row_number() to partition by bucket_end and order by precedence asc, created_at
desc (or equivalent) so deduplicated_metrics and the metrics CTE
deterministically keep the desired row.


coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
macros/edr/tests/test_data_freshness_sla.sql (1)

198-207: The is_failure logic is correct but needlessly obfuscated.

When deadline_passed is True, the rendered SQL includes when not TRUE then false — a branch that can never match (dead code). When False, it renders when not FALSE then false — a catch-all that always matches. Both work, but anyone reading the generated SQL will be confused by these tautological / contradictory conditions.

A clearer alternative:

♻️ Suggested refactor
             case
                 when freshness_status = 'DATA_FRESH' then false
-                {# If deadline hasn't passed, don't fail yet #}
-                {% if deadline_passed %}
-                when not TRUE then false
-                {% else %}
-                when not FALSE then false
-                {% endif %}
-                else true
+                {% if not deadline_passed %}
+                {# Deadline hasn't passed yet — don't fail regardless of status #}
+                else false
+                {% else %}
+                {# Deadline has passed — fail if data is not fresh #}
+                else true
+                {% endif %}
             end as is_failure,

This produces the same behavior but the generated SQL is immediately understandable: either else false (always pass) or else true (fail on non-fresh).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@macros/edr/tests/test_data_freshness_sla.sql` around lines 198 - 207, The
is_failure CASE is needlessly obfuscated by injecting tautological WHENs via the
deadline_passed Jinja branch; simplify by making the macro render a
straightforward CASE for is_failure that does not emit impossible or always-true
WHENs: change the template around the CASE producing is_failure so that when
deadline_passed is true the CASE returns "when freshness_status = 'DATA_FRESH'
then false else true" (fail on non-fresh) and when deadline_passed is false it
returns "when freshness_status = 'DATA_FRESH' then false else false" (always
pass), replacing the current conditional WHEN not TRUE/WHEN not FALSE logic in
the is_failure generation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@macros/edr/tests/test_data_freshness_sla.sql`:
- Around line 208-214: The NO_DATA branch of result_description concatenates
where_expression directly into a single-quoted SQL string, which breaks when
where_expression contains single quotes; update the Jinja template in the
NO_DATA branch to escape single quotes before concatenation (e.g., use the Jinja
replace filter on where_expression to double single-quotes) so
formatted_sla_time, timezone and model_relation.identifier are preserved and the
generated SQL remains valid.

---

Duplicate comments:
In `@macros/edr/tests/test_data_freshness_sla.sql`:
- Around line 160-168: The alias max_timestamp_utc in the max_data_timestamp CTE
is misleading because elementary.edr_cast_as_timestamp(timestamp_column) only
casts and does not convert to UTC; update the CTE to either (a) perform an
explicit timezone-normalization to UTC after casting (using your project's
standard timezone conversion helper) so the value truly is UTC when compared to
target_date_start_utc / target_date_end_utc, or (b) if you do not want to
convert here, rename the alias to something accurate (e.g., max_timestamp) and
adjust downstream references that expect a UTC value (involving
max_data_timestamp, max_timestamp_utc, and comparisons to
target_date_start_utc/target_date_end_utc) to either convert to UTC at
comparison time or use the normalized value. Ensure you update all usages of
max_timestamp_utc to the new name or to use the converted UTC timestamp
consistently.
- Around line 176-184: The current CASE in freshness_status misses future-dated
rows because mdt.max_timestamp_utc > sd.target_date_end_utc falls through to the
final else and is marked 'DATA_STALE'; update the CASE to add an explicit branch
for future timestamps (e.g., add "when mdt.max_timestamp_utc >
sd.target_date_end_utc then 'DATA_FRESH'" or, if you need to track them
separately, "then 'DATA_FUTURE'") before the NO_DATA and else branches so
mdt.max_timestamp_utc values after sd.target_date_end_utc are handled correctly;
locate the CASE that produces freshness_status and modify the conditions around
mdt.max_timestamp_utc, sd.target_date_start_utc, and sd.target_date_end_utc
accordingly.


joostboon force-pushed the feature/volume-threshold-test branch 2 times, most recently from 4600b5c to 11fe645 on February 18, 2026 at 15:28
Add two new Elementary tests:
- data_freshness_sla: checks if data was updated before a specified SLA deadline
- volume_threshold: monitors row count changes with configurable warn/error
  thresholds, using Elementary's metric caching to avoid redundant computation

Fixes applied:
- volume_threshold: union historical metrics with new metrics for comparison
- volume_threshold: deterministic dedup with source_priority tiebreaker
- volume_threshold: let get_time_bucket handle defaults
- data_freshness_sla: treat future-dated data as fresh (remove upper bound)
- data_freshness_sla: escape single quotes in where_expression
- data_freshness_sla: simplify deadline_passed logic
- data_freshness_sla: rename max_timestamp_utc to max_timestamp (no UTC conversion)
- data_freshness_sla: fix macro comment to match actual behavior
- data_freshness_sla: document UTC assumption, add ephemeral model check

Co-authored-by: Cursor <cursoragent@cursor.com>
joostboon force-pushed the feature/volume-threshold-test branch from 11fe645 to a51ff11 on February 18, 2026 at 15:33
@joostboon (author) commented:

@coderabbitai review


coderabbitai bot commented Feb 18, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@joostboon (author) commented:

All review comments have been addressed in commit a51ff11. Summary of fixes:

1. max_timestamp_utc misleading alias (data_freshness_sla)

  • Renamed to `max_timestamp` since no UTC conversion happens
  • Updated macro comment to describe actual behavior
  • UTC assumption documented in test docstring (lines 58-61)

2. Future-dated data classified as DATA_STALE (data_freshness_sla)

  • Widened DATA_FRESH check to `>= target_date_start_utc` (removed upper bound)
  • Pre-loaded/future-timestamped data now correctly counts as fresh

3. Non-deterministic deduplication (volume_threshold)

  • Added `source_priority` column (0 = freshly computed, 1 = cached)
  • `order by source_priority asc` so new metrics deterministically win (see the sketch after this list)

4. SQL syntax error with single quotes in where_expression (data_freshness_sla)

  • Added `| replace("'", "''")` to escape single quotes before concatenation
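
A minimal sketch of the deterministic dedup from item 3. The CTE and column names (all_metrics, ranked_metrics, source_priority) follow the review discussion, but the exact shape is an assumption rather than the merged code:

    with all_metrics as (
        select bucket_start, bucket_end, metric_value,
               0 as source_priority  -- freshly computed metrics win ties
        from test_metrics
        union all
        select bucket_start, bucket_end, metric_value,
               1 as source_priority  -- cached metrics lose ties
        from data_monitoring_metrics_table
    ),

    ranked_metrics as (
        select *,
               row_number() over (
                   partition by bucket_start, bucket_end
                   order by source_priority asc
               ) as rn
        from all_metrics
    )

    select bucket_start, bucket_end, metric_value
    from ranked_metrics
    where rn = 1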

joostboon added a commit to elementary-data/elementary that referenced this pull request Feb 18, 2026
Add docs for two new Elementary tests:
- data_freshness_sla: verifies data was updated before an SLA deadline
- volume_threshold: monitors row count changes with warn/error thresholds

Refs: elementary-data/dbt-data-reliability#912, elementary-data/dbt-data-reliability#932

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
joostboon and others added 2 commits February 18, 2026 21:11
…gQuery test_seed_run_results)

Co-authored-by: Cursor <cursoragent@cursor.com>
…alse (BigQuery test_seed_run_results)"

This reverts commit 9fc552e.
coderabbitai bot left a comment

🧹 Nitpick comments (1)
macros/edr/tests/test_volume_threshold.sql (1)

179-193: result_description can produce NULL concatenation when previous_row_count is NULL.

Although the where severity_level > 0 filter (line 193) should exclude rows with a NULL previous_row_count (severity = 0 per line 162), the result_description on lines 189–191 unconditionally concatenates percent_change, previous_row_count, and current_row_count. If a future code change relaxes the severity filter, the concatenation with NULLs would produce NULL on most SQL engines (in standard SQL, 'text' || NULL yields NULL). Low risk given the current filter, but worth noting for maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@macros/edr/tests/test_volume_threshold.sql` around lines 179 - 193, The
result_description concatenation can yield NULL if any of percent_change,
previous_row_count, or current_row_count are NULL; update the expression in the
SELECT for result_description to guard each interpolated value with COALESCE (or
an equivalent null-safe conversion) before casting/concatenating so it always
produces a string (e.g., COALESCE(cast(previous_row_count as {{
elementary.edr_type_string() }}), 'NULL') etc. Modify the result_description
construction in the query (referencing result_description, percent_change,
previous_row_count, current_row_count and the casts using
elementary.edr_type_string()) so it is null-safe even if the severity_level
filter is relaxed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@macros/edr/tests/test_data_freshness_sla.sql`:
- Around line 180-193: The freshness check in the freshness_result CTE was
widened to treat future-dated rows as fresh by using mdt.max_timestamp >=
sd.target_date_start_utc (removing any upper bound), which is correct; no code
change required — leave the condition in freshness_result as-is, ensure the case
branches (NO_DATA / DATA_FRESH / DATA_STALE) remain unchanged and that
downstream references to freshness_result continue to expect this behavior
(e.g., any assertions in tests that reference freshness_status,
mdt.max_timestamp, sla_deadline/sd.target_date_start_utc).
- Around line 212-218: The existing single-quote escaping for where_expression
is correct — keep the replacement expression {{ where_expression | replace("'",
"''") }} inside the NO_DATA message construction (the CASE branch that builds
the 'No data found in "{{ model_relation.identifier }}"' string) as-is; no code
changes needed here, ensure the template continues to use replace("'", "''")
when interpolating where_expression into the SQL string literal.

In `@macros/edr/tests/test_volume_threshold.sql`:
- Around line 104-128: The nondeterminism is resolved by adding source_priority
and deterministic ordering: ensure in all_metrics that source_priority is set to
1 for the select from data_monitoring_metrics_table and 0 for the select from
test_metrics (as shown), that ranked_metrics partitions by bucket_start,
bucket_end and orders by source_priority asc (so fresh metrics win), and that
metrics selects rn = 1; also confirm the metric_properties comparison uses the
same normalized JSON produced by dict_to_quoted_json and that
full_table_name/metric_name casing normalization (upper/lower) is consistent
across the queries (references: all_metrics, ranked_metrics, metrics,
source_priority, dict_to_quoted_json, full_table_name, metric_name).

