[LEADS-182] - Add eval config data to the report #142
Conversation
Walkthrough: Adds a new CSV column.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
VladimirKadlec
left a comment
Nice work, thank you. Please add this to the documentation.
Other than that LGTM.
asamal4
left a comment
Thank you. LGTM, one nit.
FYI: we will also have to refactor the metadata mapping to simplify the logic (removing the extra logic in evaluator.py), but that is not part of this story or PR.
```python
if self.system_config is not None and hasattr(self.system_config, "output"):
    if hasattr(self.system_config.output, "summary_config_sections"):
        return self.system_config.output.summary_config_sections
# Default sections if not configured (see system.py:220)
```
Please remove the code line number; that may change.
In fact, remove the comment entirely and use a constant; that will be self-explanatory.
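The suggested change might look like the following sketch. The constant name `DEFAULT_STORED_CONFIGS`, the default value, and the wrapping function are assumed from the PR context, not taken verbatim from the diff:

```python
# Hypothetical sketch of the reviewer's suggestion: a named constant
# replaces the "see system.py:220" comment and serves as the fallback.
DEFAULT_STORED_CONFIGS = ["llm", "embedding", "api"]


def get_summary_config_sections(system_config):
    """Return configured summary sections, falling back to the constant."""
    if system_config is not None and hasattr(system_config, "output"):
        if hasattr(system_config.output, "summary_config_sections"):
            return system_config.output.summary_config_sections
    return DEFAULT_STORED_CONFIGS
```

The constant is self-documenting at the call site and stays in sync with the model default if both import it from one place.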
Mock system_config and metric_manager now properly support iteration operations to prevent TypeError when production code iterates over model_fields.keys() or metadata.items(). Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Force-pushed from 25293d8 to 3ffbb1e
```python
    "result",
    "score",
    "threshold",
    "metrics_metadata",
```
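For illustration, a minimal sketch of how a column list like this could drive CSV output with the stdlib `csv` module. The column names come from the diff; the row data and the JSON-in-a-cell encoding are invented for the example:

```python
import csv
import io
import json

# Column list as in the diff ("metrics_metadata" was later renamed per review).
CSV_COLUMNS = ["result", "score", "threshold", "metrics_metadata"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=CSV_COLUMNS)
writer.writeheader()
# Hypothetical row: the metadata dict is serialized to JSON so it
# fits into a single CSV cell; csv handles the quoting.
writer.writerow(
    {
        "result": "PASS",
        "score": 0.92,
        "threshold": 0.8,
        "metrics_metadata": json.dumps({"judge_model": "gpt-4o"}),
    }
)
print(buf.getvalue())
```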
Please rename the column:

```diff
-    "metrics_metadata",
+    "metric_metadata",
```
Co-authored-by: Asutosh Samal <asamal@redhat.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/lightspeed_evaluation/core/models/system.py`:
- Around lines 219-222: add the missing DEFAULT_STORED_CONFIGS constant to src/lightspeed_evaluation/core/constants.py (define DEFAULT_STORED_CONFIGS = ["llm", "embedding", "api"], e.g. placed after SUPPORTED_OUTPUT_TYPES), then import DEFAULT_STORED_CONFIGS into src/lightspeed_evaluation/core/models/system.py so the Field default for summary_config_sections uses that constant. Ensure the import statement includes DEFAULT_STORED_CONFIGS and make no other changes to the Field definition in the System model.
```python
summary_config_sections: list[str] = Field(
    default=DEFAULT_STORED_CONFIGS,
    description="Configuration sections to include in summary reports",
)
```
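As an aside, the default here is a mutable list. Pydantic copies mutable `Field` defaults per instance, so this is safe there, whereas a plain stdlib dataclass needs `default_factory` for the same effect. A minimal stdlib sketch of that pattern, with the field name borrowed from the diff and everything else assumed:

```python
from dataclasses import dataclass, field


@dataclass
class OutputConfig:
    # A mutable default must go through default_factory so each instance
    # gets its own list instead of all instances sharing one object.
    summary_config_sections: list[str] = field(
        default_factory=lambda: ["llm", "embedding", "api"]
    )


a = OutputConfig()
b = OutputConfig()
a.summary_config_sections.append("extra")
print(b.summary_config_sections)  # b is unaffected by mutating a
```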
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Search for DEFAULT_STORED_CONFIGS definition in the constants module
rg -n "DEFAULT_STORED_CONFIGS" --type py
```

Repository: lightspeed-core/lightspeed-evaluation
Length of output: 172
🏁 Script executed:

```shell
#!/bin/bash
# Check the import block in system.py
sed -n '8,37p' src/lightspeed_evaluation/core/models/system.py
echo "---"
# Check what constants are defined in the constants module
rg -n "^DEFAULT_" src/lightspeed_evaluation/core/constants.py | head -30
```

Repository: lightspeed-core/lightspeed-evaluation
Length of output: 1954
🏁 Script executed:

```shell
#!/bin/bash
# Search specifically for DEFAULT_STORED_CONFIGS in the constants module
rg -n "DEFAULT_STORED_CONFIGS" src/lightspeed_evaluation/core/constants.py
echo "---"
# Also check if it might be defined with a different name
rg -n "STORED_CONFIGS|stored.*config" src/lightspeed_evaluation/core/constants.py -i
```

Repository: lightspeed-core/lightspeed-evaluation
Length of output: 84
🏁 Script executed:

```shell
#!/bin/bash
# Read the actual lines 219-222 with context
sed -n '210,230p' src/lightspeed_evaluation/core/models/system.py
echo "---"
# Check the class definition to understand context better
sed -n '200,240p' src/lightspeed_evaluation/core/models/system.py
```

Repository: lightspeed-core/lightspeed-evaluation
Length of output: 2315
🏁 Script executed:

```shell
#!/bin/bash
# Search for any config-related constants in the entire codebase
rg -n "config.*sections|stored.*config" src/lightspeed_evaluation/core/constants.py -i
echo "---"
# Check if there's a SUPPORTED_* constant that might be appropriate
rg -n "SUPPORTED_" src/lightspeed_evaluation/core/constants.py
```

Repository: lightspeed-core/lightspeed-evaluation
Length of output: 411
🏁 Script executed:

```shell
#!/bin/bash
# Check if DEFAULT_STORED_CONFIGS might be defined elsewhere in the codebase
rg -rn "DEFAULT_STORED_CONFIGS\s*=" src/lightspeed_evaluation/
echo "---"
# Let's also check the entire constants file length to see how much we might have missed
wc -l src/lightspeed_evaluation/core/constants.py
echo "---"
# Search the entire constants file for any mention
cat src/lightspeed_evaluation/core/constants.py | grep -i "stored\|config"
```

Repository: lightspeed-core/lightspeed-evaluation
Length of output: 136
🏁 Script executed:

```shell
#!/bin/bash
# Let's read the entire constants file to see what's available
cat src/lightspeed_evaluation/core/constants.py
echo "---"
# Also check if there are any comments or documentation about config sections
rg -n "config.*section|summary.*config" src/lightspeed_evaluation/ -i -A 2 -B 2
```

Repository: lightspeed-core/lightspeed-evaluation
Length of output: 7764
Define DEFAULT_STORED_CONFIGS constant before importing it.
The constant DEFAULT_STORED_CONFIGS is used as the default value at line 220 but does not exist in lightspeed_evaluation.core.constants. The correct default should be ["llm", "embedding", "api"] (as documented in src/lightspeed_evaluation/core/output/generator.py:472-473).
Add the constant definition to src/lightspeed_evaluation/core/constants.py, then import it in system.py:
🔧 Proposed fix: Define and import `DEFAULT_STORED_CONFIGS`
- Add to `src/lightspeed_evaluation/core/constants.py` (around line 77, after `SUPPORTED_OUTPUT_TYPES`):

```diff
 SUPPORTED_OUTPUT_TYPES = ["csv", "json", "txt"]
+DEFAULT_STORED_CONFIGS = ["llm", "embedding", "api"]
 SUPPORTED_CSV_COLUMNS = [
```

- Add to the import statement in `src/lightspeed_evaluation/core/models/system.py` (lines 8-37):
```diff
 from lightspeed_evaluation.core.constants import (
     DEFAULT_API_BASE,
     DEFAULT_API_CACHE_DIR,
     DEFAULT_API_TIMEOUT,
     DEFAULT_API_VERSION,
     DEFAULT_BASE_FILENAME,
     DEFAULT_EMBEDDING_CACHE_DIR,
     DEFAULT_EMBEDDING_MODEL,
     DEFAULT_EMBEDDING_PROVIDER,
     DEFAULT_ENDPOINT_TYPE,
     DEFAULT_LLM_CACHE_DIR,
     DEFAULT_LLM_MAX_TOKENS,
     DEFAULT_LLM_MODEL,
     DEFAULT_LLM_PROVIDER,
     DEFAULT_SSL_VERIFY,
     DEFAULT_SSL_CERT_FILE,
     DEFAULT_LLM_RETRIES,
     DEFAULT_LLM_TEMPERATURE,
     DEFAULT_LOG_FORMAT,
     DEFAULT_LOG_PACKAGE_LEVEL,
     DEFAULT_LOG_SHOW_TIMESTAMPS,
     DEFAULT_LOG_SOURCE_LEVEL,
     DEFAULT_OUTPUT_DIR,
+    DEFAULT_STORED_CONFIGS,
     DEFAULT_VISUALIZATION_DPI,
     DEFAULT_VISUALIZATION_FIGSIZE,
     SUPPORTED_CSV_COLUMNS,
     SUPPORTED_ENDPOINT_TYPES,
     SUPPORTED_GRAPH_TYPES,
     SUPPORTED_OUTPUT_TYPES,
 )
```

🧰 Tools
🪛 GitHub Actions: Pyright
[error] 220-220: pyright: 'DEFAULT_STORED_CONFIGS' is not defined. (reportUndefinedVariable). Command failed: make pyright
🪛 GitHub Actions: Python linter
[error] 220-220: pylint: Undefined variable 'DEFAULT_STORED_CONFIGS'.
🪛 GitHub Actions: Ruff
[error] 220-220: F821 Undefined name 'DEFAULT_STORED_CONFIGS' (ruff: Undefined name).
🪛 GitHub Actions: Type checks
[error] 220-220: Mypy error: Name 'DEFAULT_STORED_CONFIGS' is not defined. [name-defined]
Description
Add all configuration information to the JSON and TXT files.
Add metric-level metadata to the CSV file.
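A rough sketch of what "store selected config sections in the report" could look like. The section names match the PR's default (`["llm", "embedding", "api"]`); the config contents and report shape are invented for illustration, not taken from the actual generator:

```python
import json

# Hypothetical full system config; only selected sections go in the report.
system_config = {
    "llm": {"provider": "openai", "model": "gpt-4o"},
    "embedding": {"provider": "huggingface"},
    "api": {"timeout": 30},
    "logging": {"level": "INFO"},  # not selected, so left out of the report
}
summary_config_sections = ["llm", "embedding", "api"]

# Keep only the configured sections, skipping any that are absent.
report = {
    "config": {
        name: system_config[name]
        for name in summary_config_sections
        if name in system_config
    }
}
print(json.dumps(report, indent=2))
```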
Type of change
Tools used to create PR
Assisted-by: Claude
Generated by: Claude-4.5-sonnet
Related Tickets & Documents
Related Issue: RSPEED-2121
Closes: LEADS-182
Checklist before requesting a review
Testing
test_generator.py and test_evaluator.py were updated to support new additions to the codebase.
Output Examples
leads-182-json-example.json
leads-182-metrics-metadata-example.csv
leads-182-txt-example.txt
Summary by CodeRabbit
New Features
Documentation