
automate verification of modified benchmark results #710

@nwoolmer

Description


There is currently a split in the ClickBench dataset between results submitted by vendors and results re-run by maintainers. This split is an artificial data integrity issue.

Instead, there could be a CI job that re-runs the benchmarks for whichever vendor's files were updated. If the vendor submitted results alongside their changes, those could be compared against the re-run, and the job could fail if they fall outside an acceptable margin of error.
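
A minimal sketch of what the comparison step could look like, assuming the submitted and re-run results are JSON files whose `result` field holds per-query timings; the file layout, field name, and tolerance value here are illustrative assumptions, not ClickBench's actual schema or policy:

```python
import json
import sys

# Hypothetical comparison step for the proposed CI job.
# Assumed margin of error: 25% relative deviation per query.
RELATIVE_TOLERANCE = 0.25

def load_timings(path: str) -> list[float]:
    with open(path) as f:
        data = json.load(f)
    # Take the best (minimum) run per query if multiple runs are recorded.
    return [min(runs) if isinstance(runs, list) else runs for runs in data["result"]]

def compare(submitted_path: str, rerun_path: str) -> bool:
    submitted = load_timings(submitted_path)
    rerun = load_timings(rerun_path)
    ok = True
    for i, (s, r) in enumerate(zip(submitted, rerun)):
        # Relative deviation of the CI re-run from the vendor-submitted timing.
        baseline = max(s, 1e-9)
        deviation = abs(r - s) / baseline
        if deviation > RELATIVE_TOLERANCE:
            print(f"query {i}: submitted {s:.3f}s, re-run {r:.3f}s ({deviation:.0%} off)")
            ok = False
    return ok

if __name__ == "__main__":
    submitted_file, rerun_file = sys.argv[1], sys.argv[2]
    sys.exit(0 if compare(submitted_file, rerun_file) else 1)
```

The surrounding workflow would presumably diff the PR against the base branch to find which vendor directories changed, re-run only those benchmarks, and then invoke a check along these lines, surfacing any out-of-tolerance queries in the failed job log.
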

Automating result verification in this way could save maintainer time in the long run, and vendors would still be able to submit results if they think the CI run diverges from expectations.
