@@ -129,6 +129,7 @@ def get_evaluation_tables_description() -> str:
 - `f1_score` (double): F1 score (0-1)
 - `false_alarm_rate` (double): False alarm rate (0-1)
 - `false_discovery_rate` (double): False discovery rate (0-1)
+- `weighted_overall_score` (double): Weighted overall score (0-1)
 - `execution_time` (double): Time taken to evaluate (seconds)

 **Partitioned by**: date (YYYY-MM-DD format)
@@ -147,6 +148,7 @@ def get_evaluation_tables_description() -> str:
 - `f1_score` (double): Section F1 score (0-1)
 - `false_alarm_rate` (double): Section false alarm rate (0-1)
 - `false_discovery_rate` (double): Section false discovery rate (0-1)
+- `weighted_overall_score` (double): Weighted overall score (0-1)
 - `evaluation_date` (timestamp): When the evaluation was performed

 **Partitioned by**: date (YYYY-MM-DD format)
@@ -168,6 +170,7 @@ def get_evaluation_tables_description() -> str:
 - `evaluation_method` (string): Method used for comparison (EXACT, FUZZY, SEMANTIC, etc.)
 - `confidence` (string): Confidence score from extraction process
 - `confidence_threshold` (string): Confidence threshold used for evaluation
+- `weight` (double): Weight assigned to this attribute in the evaluation
 - `evaluation_date` (timestamp): When the evaluation was performed

 **Partitioned by**: date (YYYY-MM-DD format)
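The new `weight` column alongside `weighted_overall_score` suggests the overall score is aggregated from per-attribute scores using these weights. A minimal sketch of that aggregation, assuming a weight-normalized mean of per-attribute scores in the 0-1 range (the function name, record shape, and formula are assumptions for illustration, not taken from this repository):

```python
def weighted_overall_score(attributes: list[dict]) -> float:
    """Weight-normalized mean of per-attribute scores (0-1).

    Each record is assumed to carry a `weight` (as in the new attribute-level
    column) and a per-attribute `score` already scaled to 0-1.
    """
    total_weight = sum(a["weight"] for a in attributes)
    if total_weight == 0:
        # No weighted attributes: define the overall score as 0 rather than
        # dividing by zero.
        return 0.0
    return sum(a["weight"] * a["score"] for a in attributes) / total_weight


# Hypothetical usage: title counts twice as much as date.
attrs = [
    {"name": "title", "weight": 2.0, "score": 1.0},
    {"name": "date", "weight": 1.0, "score": 0.5},
]
print(weighted_overall_score(attrs))  # ~0.833
```

Normalizing by the total weight keeps the result in 0-1, matching the documented range of `weighted_overall_score`.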