2 changes: 1 addition & 1 deletion in src/current/v26.1/create-sequence.md
@@ -11,7 +11,7 @@ The `CREATE SEQUENCE` [statement]({% link {{ page.version.version }}/sql-stateme

## Considerations

- Using a sequence is slower than [auto-generating unique IDs with the `gen_random_uuid()`, `uuid_v4()` or `unique_rowid()` built-in functions]({% link {{ page.version.version }}/sql-faqs.md %}#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) and is likely to cause performance problems due to [hotspots]({% link {{ page.version.version }}/understand-hotspots.md %}). Incrementing a sequence requires a write to persistent storage, whereas auto-generating a unique ID does not. Therefore, use auto-generated unique IDs unless an incremental sequence is preferred or required. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- Using a sequence is slower than [auto-generating unique IDs with the `gen_random_uuid()`, `uuid_v4()` or `unique_rowid()` built-in functions]({% link {{ page.version.version }}/sql-faqs.md %}#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) and is likely to cause performance problems due to [hotspots]({% link {{ page.version.version }}/understand-hotspots.md %}). Incrementing a sequence requires a write to persistent storage. In CockroachDB, all writes are [replicated and must reach a write quorum]({% link {{ page.version.version }}/architecture/replication-layer.md %}). This means that in [multi-region deployments]({% link {{ page.version.version }}/multiregion-overview.md %}), this can add cross-region latency (for example, in a [region survival]({% link {{ page.version.version }}/multiregion-survival-goals.md %}) configuration) and can become a throughput bottleneck for write-heavy workloads. [Cached sequences](#cache-sequence-values-in-memory-per-node) can reduce the frequency of these writes, though gaps in sequence values may occur if cached values are lost. Auto-generating a unique ID does not require a replicated write. Therefore, use auto-generated unique IDs unless an incremental sequence is preferred or required. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).

non-blocking nit:

Suggested change
- Using a sequence is slower than [auto-generating unique IDs with the `gen_random_uuid()`, `uuid_v4()` or `unique_rowid()` built-in functions]({% link {{ page.version.version }}/sql-faqs.md %}#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) and is likely to cause performance problems due to [hotspots]({% link {{ page.version.version }}/understand-hotspots.md %}). Incrementing a sequence requires a write to persistent storage. In CockroachDB, all writes are [replicated and must reach a write quorum]({% link {{ page.version.version }}/architecture/replication-layer.md %}). This means that in [multi-region deployments]({% link {{ page.version.version }}/multiregion-overview.md %}), this can add cross-region latency (for example, in a [region survival]({% link {{ page.version.version }}/multiregion-survival-goals.md %}) configuration) and can become a throughput bottleneck for write-heavy workloads. [Cached sequences](#cache-sequence-values-in-memory-per-node) can reduce the frequency of these writes, though gaps in sequence values may occur if cached values are lost. Auto-generating a unique ID does not require a replicated write. Therefore, use auto-generated unique IDs unless an incremental sequence is preferred or required. For more information, see [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
- Using a sequence is slower than [auto-generating unique IDs with the `gen_random_uuid()`, `uuid_v4()`, or `unique_rowid()` built-in functions]({% link {{ page.version.version }}/sql-faqs.md %}#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) and is likely to cause performance problems due to [hotspots]({% link {{ page.version.version }}/understand-hotspots.md %}). Incrementing a sequence requires a write to persistent storage. In CockroachDB, all writes are [replicated and must reach a write quorum]({% link {{ page.version.version }}/architecture/replication-layer.md %}). In [multi-region deployments]({% link {{ page.version.version }}/multiregion-overview.md %}), replicated writes can add cross-region latency (for example, in a [region survival]({% link {{ page.version.version }}/multiregion-survival-goals.md %}) configuration) and become a throughput bottleneck for write-heavy workloads. [Cached sequences](#cache-sequence-values-in-memory-per-node) can reduce the frequency of these writes, though gaps in sequence values may occur if cached values are lost. Auto-generating a unique ID does not require a replicated write. Therefore, use auto-generated unique IDs unless an incremental sequence is preferred or required. For more information, refer to [Unique ID best practices]({% link {{ page.version.version }}/performance-best-practices-overview.md %}#unique-id-best-practices).
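As a rough sketch of the tradeoff this paragraph describes, using hypothetical table and sequence names (`users`, `orders`, `order_seq`): a UUID default avoids the per-insert sequence write, while a cached sequence reduces how often that write happens.

```sql
-- Auto-generated unique IDs: no replicated sequence write per insert.
CREATE TABLE users (
    id UUID NOT NULL DEFAULT gen_random_uuid() PRIMARY KEY,
    name STRING
);

-- If an incremental sequence is required, caching values in memory
-- reduces how often incrementing the sequence writes to storage,
-- at the cost of possible gaps if cached values are lost.
CREATE SEQUENCE order_seq CACHE 10;

CREATE TABLE orders (
    id INT NOT NULL DEFAULT nextval('order_seq') PRIMARY KEY,
    total DECIMAL
);
```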

- A column that uses a sequence can have a gap in the sequence values if a transaction advances the sequence and is then rolled back. Sequence updates are committed immediately and aren't rolled back along with their containing transaction. This is done to avoid blocking concurrent transactions that use the same sequence.
- {% include {{page.version.version}}/performance/use-hash-sharded-indexes.md %}
- By default, you cannot create sequences that are [owned by]({% link {{ page.version.version }}/security-reference/authorization.md %}#object-ownership) columns in tables in other databases. You can enable such sequence creation by setting the `sql.cross_db_sequence_owners.enabled` [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) to `true`.
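A minimal sketch of the rollback-gap behavior and the cross-database ownership setting described above, assuming hypothetical databases `db1` and `db2` and a sequence `order_seq`:

```sql
-- Sequence updates commit immediately, so a rolled-back transaction leaves a gap.
BEGIN;
SELECT nextval('order_seq');  -- advances the sequence right away
ROLLBACK;                     -- rolls back the transaction, but not the sequence
SELECT nextval('order_seq');  -- skips the value consumed above

-- Allow a sequence to be owned by a column in a table in another database.
SET CLUSTER SETTING sql.cross_db_sequence_owners.enabled = true;
CREATE SEQUENCE db1.invoice_seq OWNED BY db2.invoices.id;
```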