
Conversation

@Alhanaqtah (Contributor) commented Jan 4, 2026

Description

The benchmarks have been migrated to the new b.Loop() API. Also, the dependence on b.N when determining the size of the generated data has been eliminated.

  • I have read and followed all requirements in CONTRIBUTING.md;
  • I used LLM/AI assistance to make this pull request;

If you have used LLM/AI assistance please provide model name and full prompt:

Model: {{model-name}}
Prompt: {{prompt}}

@Alhanaqtah Alhanaqtah changed the base branch from 263-tidy-up-benchmarks to main January 11, 2026 08:54
@dkharms (Member) commented Jan 20, 2026

@Alhanaqtah hey! Sorry for the quite late reply.

Could you please fix the issues I've left and also do the following:

  • fix the pull request title (you can find the rules in the contribution guidelines);
  • remove the Fixes #263 line from the description; I want to add new benchmarks as part of this issue.

shell: bash
run: |
-  COUNT=$([[ "${{ github.event_name }}" == "pull_request" ]] && echo 1 || echo 5)
+  COUNT=$([[ "${{ github.event_name }}" == "pull_request" ]] && echo 5 || echo 5)
Member:

One of the goals of this issue is to reduce benchmark execution time, which in turn lets us increase the number of benchmark iterations.

Could you increase COUNT:

  • to 10 for pull requests;
  • to 15 otherwise.

Contributor Author:

I'm facing this error because I previously modified continuous-benchmarks.yml:

[remote rejected] 263-tidy-up-benchmarks -> 263-tidy-up-benchmarks (refusing to allow a Personal Access Token to create or update workflow `.github/workflows/continuous-benchmarks.yml` without `workflow` scope)

So you'll have to do it)

@dkharms (Member) Jan 24, 2026:

Try to rebase this branch onto main first. I guess it will fix the issue.

}

-func BenchmarkBitmask(b *testing.B) {
+func BenchmarkBitmask_HasBitsIn(b *testing.B) {
Member:

Let's stick with the previous naming to preserve the history of measurements.

FYI, we track them here - https://ozontech.github.io/seq-db/

Contributor Author:

Wow, that's interesting. Do you track the bench results in each release to find regressions?

Member:

Almost: for each commit that gets merged into the main branch.

}
}

func BenchmarkGenerateDocsJSON(b *testing.B) {
@dkharms (Member) Jan 12, 2026:

IMHO, these benchmarks are useless: they evaluate the performance of functions that are used only in tests (and only in integration tests, AFAIK).

What do you think about removing such benchmarks?

Contributor Author:

I agree with you. Maybe I should also delete the benchmarks for the functions that are not in use, GenerateDocsJSON and GenerateDocs (and maybe the functions themselves)?

Member:

Yeah, let's do that.

}
wg.Wait()
-logger.Info("list", zap.Int("total", len(list)))
+b.Logf("list_total=%d", len(list))
Member:

It's better to remove any method invocations that can affect measurements and are not part of the code under benchmark.


// bench for base point
// you can't go faster
func BenchmarkCopy(b *testing.B) {
Member:

Let's skip this benchmark as well.

}

// base point
func BenchmarkIterate(b *testing.B) {
Member:

Let's skip this benchmark as well.

}
}

func BenchmarkStatic(b *testing.B) {
Member:

Let's skip this benchmark as well.

@Alhanaqtah Alhanaqtah changed the title from "benchmarks tidied up" to "refactor: benchmarks tidied up" Jan 24, 2026