diff --git a/architecture/architecture-overview.mdx b/architecture/architecture-overview.mdx index f4b7b804..e5d1cdba 100644 --- a/architecture/architecture-overview.mdx +++ b/architecture/architecture-overview.mdx @@ -1,13 +1,13 @@ --- title: "Architecture Overview" -description: "The core components of PowerSync are the service and client SDKs" +description: "The core components of PowerSync are the service and client SDKs." --- -The [PowerSync Service](/architecture/powersync-service) and client SDK operate in unison to keep client-side SQLite databases in sync with a backend database. Learn about their architecture: +The [PowerSync Service](/architecture/powersync-service) and client SDK operate in unison to keep client-side SQLite databases in sync with a backend source database. Learn about their architecture: @@ -16,17 +16,10 @@ The [PowerSync Service](/architecture/powersync-service) and client SDK operate ### Protocol -Learn about the sync protocol used between PowerSync clients and a [PowerSync Service](/architecture/powersync-service): +Learn about the sync protocol used between PowerSync clients and the [PowerSync Service](/architecture/powersync-service): -### Self-Hosted Architecture - -For more details on typical architecture of a production self-hosted deployment, see here: - - - - \ No newline at end of file diff --git a/architecture/client-architecture.mdx b/architecture/client-architecture.mdx index 0900900a..36588461 100644 --- a/architecture/client-architecture.mdx +++ b/architecture/client-architecture.mdx @@ -2,70 +2,76 @@ title: "Client Architecture" --- -### Reading and Writing Data +The [PowerSync Client SDK](/client-sdks/overview) is embedded into a software application. -From the client-side perspective, there are two data flow paths: +The Client SDK manages the client connection to the [PowerSync Service](/architecture/powersync-service), authenticating via a [JWT](/configuration/auth/overview). 
The connection between the client and the PowerSync Service is encrypted, and either uses HTTP streams or WebSockets (depending on the specific [Client SDK](/client-sdks/overview) being used). -* Reading data from the server or downloading data (to the SQLite database) -* Writing changes back to the server, or uploading data (from the SQLite database) - -#### Reading Data - -App clients always read data from a local SQLite database. The local database is asynchronously hydrated from the PowerSync Service. - -A developer configures [Sync Rules](/usage/sync-rules) for their PowerSync instance to control which data is synced to which users. - -The PowerSync Service connects directly to the backend database and uses a change stream to hydrate dynamic data partitions, called [sync buckets](/usage/sync-rules/organize-data-into-buckets). Sync buckets are used to partition data according to the configured Sync Rules. (In most use cases, only a subset of data is required in a client's database and not a copy of the entire backend database.) - - - - - -The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the backend database, based on the [Sync Rules](/usage/sync-rules) configured by the developer: +The Client SDK provides access to a managed [SQLite](/resources/faq#why-does-powersync-use-sqlite-as-the-client-side-database) database that is automatically kept in sync with the backend source database via the PowerSync Service, based on the [Sync Rules](/sync/rules/overview) that are active on the PowerSync Service instance. -#### Writing Data -Client-side data modifications, namely updates, deletes and inserts, are persisted in the embedded SQLite database as well as stored in an upload queue. The upload queue is a blocking [FIFO](https://en.wikipedia.org/wiki/FIFO_%28computing_and_electronics%29) queue that gets processed when network connectivity is available. 
+## Reading Data (SQLite) -Each entry in the queue is processed by writing the entry to your existing backend application API, using a function [defined by you](/installation/client-side-setup/integrating-with-your-backend) (the developer). This is to ensure that existing backend business logic is honored when uploading data changes. For more information, see the section on [integrating with your backend](/installation/client-side-setup/integrating-with-your-backend). +App clients always read data from the client-side [SQLite](https://sqlite.org/) database. When the user is online and the app is connected to the PowerSync Service, changes on the source database reflect in real-time in the SQLite database, and [Live Queries / Watch Queries](/client-sdks/watch-queries) allow the app UI to have real-time reactivity too. - - - -### Schema +## Client-Side Schema and SQLite Database Structure -On the client, the application [defines a schema](/installation/client-side-setup/define-your-schema) with tables, columns and indexes. +When you implement the PowerSync Client SDK in your application, you need to define a [client-side schema](/intro/setup-guide#define-your-client-side-schema) with tables, columns and indexes that correspond to your [Sync Rules](/sync/rules/overview). You provide this schema when the PowerSync-managed SQLite database is [instantiated](/intro/setup-guide#instantiate-the-powersync-database). -These are then usable as if they were actual SQLite tables, while in reality these are created as SQLite views. +The tables defined in your client-side schema are usable in SQL queries as if they were actual SQLite tables, while in reality they are created as _SQLite views_ based on the schemaless JSON data being synced (see [PowerSync Protocol](/architecture/powersync-protocol)). -The client SDK maintains the following tables: -1. `ps_data__<table_name>` This contains the data for `
<table_name>`, in JSON format. This table's schema does not change when columns are added, removed or changed. +The PowerSync Client SDK automatically maintains the following tables in the SQLite database: -2. `ps_data_local__
<table_name>` Same as the above, but for local-only tables. +1. `ps_data__
<table_name>` - This contains the data for each "table", in JSON format. Since JSON is being used, this table's schema does not change when columns are added, removed or changed in the Sync Rules and client-side schema. -3. `
<table_name>` (VIEW) - this is a view on the above table, with each defined column extracted from the JSON field. For example, a "description" text column would be `CAST(data ->> '$.description' as TEXT)`. +2. `ps_data_local__
<table_name>` - Same as the previous point, but for [local-only](/client-sdks/advanced/local-only-usage) tables. + +3. `
<table_name>` (`VIEW`) - These are views on the above `ps_data` tables, with each defined column in the client-side schema extracted from the JSON. For example, a `description` text column would be `CAST(data ->> '$.description' as TEXT)`. 4. `ps_untyped` - Any synced table that is not defined in the client-side schema is placed here. If the table is added to the schema at a later point, the data is then migrated to `ps_data__
<table_name>`. -5. `ps_oplog` - This is data as received by the [PowerSync Service](/architecture/powersync-service), grouped per bucket. +5. `ps_oplog` - This is operation history data as received from the [PowerSync Service](/architecture/powersync-service), grouped per bucket. -6. `ps_crud` - The local upload queue. +6. `ps_crud` - The client-side upload queue (see [Writing Data](#writing-data-via-sqlite-database-and-upload-queue) below). 7. `ps_buckets` - A small amount of metadata for each bucket. -8. `ps_migrations` - Table keeping track of SDK schema migrations. +8. `ps_migrations` - Table keeping track of Client SDK schema migrations. + +Most rows will be present in at least two tables — the `ps_data__
<table_name>` table, and in `ps_oplog`. + +The copy of the row in `ps_oplog` may be newer than the one in `ps_data__
<table_name>`. This is because of the checkpoint system in PowerSync that gives the system its consistency properties. When a full [checkpoint](/architecture/consistency) has been downloaded, data is copied over from `ps_oplog` to the individual `ps_data__
<table_name>` tables. -Most rows will be present in at least two tables — the `ps_data__
<table_name>` table, and in `ps_oplog`. It may be present multiple times in `ps_oplog`, if it was synced via multiple buckets. +It is possible for different [buckets](/architecture/powersync-service#bucket-system) in Sync Rules to include overlapping data (for example, if multiple buckets query data from the same table). If a row with the same table and ID has been synced via multiple buckets, it may be present multiple times in `ps_oplog`, but only one will be preserved in the `ps_data__
<table_name>` table (the one with the highest `op_id`). -The copy in `ps_oplog` may be newer than the one in `ps_data__
<table_name>`. Only when a full checkpoint has been downloaded, will the data be copied over to the individual tables. If multiple rows with the same table and id has been synced, only one will be preserved (the one with the highest `op_id`). - If you run into limitations with the above JSON-based SQLite view system, check out [the Raw Tables experimental feature](/usage/use-case-examples/raw-tables) which allows you to define and manage raw SQLite tables to work around some limitations. We are actively seeking feedback about this functionality. - \ No newline at end of file + **Raw Tables Instead of JSON-Backed SQLite Views**: If you run into limitations with the above JSON-based SQLite view system, check out the [Raw Tables experimental feature](/client-sdks/advanced/raw-tables) which allows you to define and manage raw SQLite tables to work around some of the limitations of PowerSync's default JSON-backed SQLite views system. We are actively seeking feedback on the raw tables functionality. + + + +## Writing Data (via SQLite Database and Upload Queue) + +Any mutations on the SQLite database, namely updates, deletes and inserts, are immediately reflected in the SQLite database, and are also automatically placed into an **upload queue** by the Client SDK. + +The upload queue is a blocking [FIFO](https://en.wikipedia.org/wiki/FIFO_%28computing_and_electronics%29) queue, and is automatically managed by the PowerSync Client SDK. + +The Client SDK processes the upload queue by invoking an `uploadData()` function [that you define](/configuration/app-backend/client-side-integration) when you integrate the Client SDK. Your `uploadData()` function implementation should call your [backend application API](/configuration/app-backend/setup) to persist the mutations to the backend source database. 
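To make the upload-queue behaviour described above concrete, here is a minimal Python sketch. It is purely illustrative (the function name and entry format are our own, not the SDK's API): entries are uploaded strictly in order, and a failed upload blocks the queue rather than skipping the entry.

```python
from collections import deque

# Illustrative sketch only (not the SDK's implementation): the upload queue
# is drained strictly in FIFO order, and a failed upload blocks the queue
# instead of skipping the entry.
def process_upload_queue(queue, upload_data):
    uploaded = 0
    while queue:
        entry = queue[0]          # peek: only remove once acknowledged
        try:
            upload_data(entry)    # your implementation calls your backend API
        except Exception:
            break                 # blocking FIFO: retry the same entry later
        queue.popleft()           # acknowledged by the backend
        uploaded += 1
    return uploaded

queue = deque([
    {"op": "PUT", "table": "todos", "id": "t1", "data": {"done": False}},
    {"op": "PATCH", "table": "todos", "id": "t1", "data": {"done": True}},
])
sent = []
count = process_upload_queue(queue, sent.append)
```

Because the queue only advances on acknowledgement, a flaky network can delay mutations but can never reorder or drop them.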
+ +We designed PowerSync this way so that you can apply your own backend business logic, validations and authorization to any mutations going to your source database. + +The PowerSync Client SDK automatically takes care of network failures and retries. If processing the upload queue fails (e.g. because the user is offline), it is automatically retried. + + + + + + diff --git a/architecture/consistency.mdx b/architecture/consistency.mdx index 4d08a4b1..f287dd30 100644 --- a/architecture/consistency.mdx +++ b/architecture/consistency.mdx @@ -1,67 +1,75 @@ --- title: "Consistency" -description: 'PowerSync uses the concept of "checkpoints" to ensure the data is consistent.' +description: 'PowerSync uses the concept of "checkpoints" to ensure that data is consistent.' --- -## PowerSync: Designed for causal+ consistency +## PowerSync: Designed for Causal+ Consistency -PowerSync is designed to have [Causal+ Consistency](https://jepsen.io/consistency/models/causal), while providing enough flexibility for applications to perform their own data validations and conflict handling. +PowerSync is designed to have [causal+ consistency](https://jepsen.io/consistency/models/causal), while providing enough flexibility for applications to perform their own data validations and conflict handling. PowerSync's consistency properties have been [tested and verified](https://github.com/nurturenature/jepsen-powersync#readme). -## How it works: Checkpoints +## How It Works: Checkpoints A checkpoint is a single point-in-time on the server (similar to an [LSN in Postgres](https://www.postgresql.org/docs/current/datatype-pg-lsn.html)) with a consistent state — only fully committed transactions are part of the state. -The client only updates its local state when it has all the data matching a checkpoint, and then it updates the state to exactly match that of the checkpoint. 
There is no intermediate state while downloading large sets of changes such as large server-side transactions. Different tables and sync buckets are all included in the same consistent checkpoint, to ensure that the state is consistent over all data in the app. +The client only updates its local state when it has all the data matching a checkpoint, and then it updates the state to exactly match that of the checkpoint. There is no intermediate state while downloading large sets of changes such as large server-side transactions. Different tables and [buckets](/architecture/powersync-service#bucket-system) are all included in the same consistent checkpoint, to ensure that the state is consistent over all data in the client. -## Local client changes +## Client-Side Mutations -Local changes are applied on top of the last checkpoint received from the server, as well as being persisted into an upload queue. +Client-side mutations are applied on top of the last checkpoint received from the server, as well as being persisted into an [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue). -While changes are present in the upload queue, the client does not advance to a new checkpoint. This means the client never has to resolve conflicts locally. +While mutations are present in the upload queue, the client does not advance to a new checkpoint. This means the client never has to resolve conflicts locally. -Only once all the local changes have been acknowledged by the server, and the data for that new checkpoint is downloaded by the client, does the client advance to the next checkpoint. This ensures that the operations are always ordered correctly on the client. +Only once all the client-side mutations have been acknowledged by the server, and the data for that new checkpoint is downloaded by the client, does the client advance to the next checkpoint. This ensures that the operations are always ordered correctly on the client. 
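The checkpoint-advance rule described above can be modeled in a few lines of Python. This is a simplified toy model (not the SDK's actual state machine), showing why the client never needs local conflict resolution: it simply refuses to advance while its own mutations are in flight.

```python
# Toy model (simplified, not the SDK's actual state machine) of the rule
# above: a client only advances to a new checkpoint once its upload queue is
# empty and the checkpoint's data has been fully downloaded.
class ClientState:
    def __init__(self):
        self.checkpoint = 0
        self.upload_queue = []

    def maybe_advance(self, new_checkpoint, fully_downloaded):
        if self.upload_queue:      # local mutations pending: hold position,
            return False           # so no local conflict resolution is needed
        if not fully_downloaded:   # no partial/intermediate state is applied
            return False
        self.checkpoint = new_checkpoint
        return True

client = ClientState()
client.upload_queue.append({"op": "PUT", "table": "lists", "id": "l1"})
blocked = client.maybe_advance(42, fully_downloaded=True)
client.upload_queue.clear()        # backend acknowledged the upload
advanced = client.maybe_advance(42, fully_downloaded=True)
```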
-## Types of local operations + +There is one nuanced case here, which is buckets with [Priority 0](/sync/advanced/prioritized-sync#special-case:-priority-0) if you are using [Prioritized Syncing](/sync/advanced/prioritized-sync). + -The client automatically records changes to the local database as PUT, PATCH or DELETE operations — corresponding to INSERT, UPDATE or DELETE statements. These are grouped together in a batch per local transaction. +## Types of Client-Side Mutations/Operations -Since the developer has full control over how operations are applied, more advanced operations can be modeled on top of these three. For example an insert-only "operations" table can be added, that records additional metadata for individual operations. +The client automatically records mutations to the client-side database as `PUT`, `PATCH` or `DELETE` operations — corresponding to `INSERT`, `UPDATE` or `DELETE` statements in SQLite. These are grouped together in a batch per client-side transaction. -## Validation and conflict handling +Since the [developer has full control](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) over how mutations are applied to the source database, more advanced operations can be modeled on top of these three. See [Custom Conflict Resolution](/handling-writes/custom-conflict-resolution) for examples. -With PowerSync offering full flexibility in how changes are applied on the server, it is also the developer's responsibility to implement this correctly to avoid consistency issues. + +## Validation and Conflict Handling + +With PowerSync offering [full flexibility](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) in how mutations are applied on the server, it is also the developer's responsibility to implement this correctly to avoid consistency issues. Some scenarios to consider: -While the client was offline, a record was modified locally. 
By the time the client is online again, that record has been deleted. Some options for handling the change: +While the client was offline, a row was modified on the client-side. By the time the client is online again, that row has been deleted on the source database. Some options for handling the mutation in your backend: -* Discard the change. +* Discard the mutation. * Discard the entire transaction. -* Re-create the record. -* Record the change elsewhere, potentially notifying the user and allowing the user to resolve the issue. +* Re-create the row. +* Record the failed mutation elsewhere, potentially notifying the user and allowing the user to resolve the issue. Some other examples include foreign-key or not-null constraints, maximum size of numeric fields, unique constraints, and access restrictions (such as row-level security policies). -With an online-only application, the user typically sees the error as soon as it occurs, and can make changes as required. In an offline-capable application, these errors may occur much later than when the change was made, so more care is required to handle these cases. +In an online-only application, the user typically sees the error as soon as it occurs, and can correct the issue as required. In an offline-capable application that syncs asynchronously with the server, these errors may occur much later than when the mutation was made, so more care is required to handle these cases. -Special care must be taken so that issues such as those do not block the upload queue — the queue cannot advance if the server does not acknowledge a change. +Special care must be taken so that issues such as those do not block the upload queue. The upload queue in the PowerSync Client SDK is a blocking [FIFO](https://en.wikipedia.org/wiki/FIFO_%28computing_and_electronics%29) queue, and the queue cannot advance if the backend does not acknowledge a mutation. 
And as mentioned above, if the queue cannot be cleared, the client does not move on to the next checkpoint of synced data. There is no single correct choice on how to handle write failures such as mentioned above — the best action depends on the specific application and scenario. However, we do have some suggestions for general approaches: -1. In general, consider relaxing constraints somewhat on the server where it is not absolutely important. It may be better to accept data that is somewhat inconsistent (e.g. a client not applying all expected validations), rather than discarding the data completely. -2. If it is critical to preserve all client changes and preserve the order of changes: - 1. Block the client's queue on unexpected errors (don't acknowledge the change). +1. In general, consider relaxing constraints somewhat on the backend where they are not absolutely required. It may be better to accept data that is somewhat inconsistent (e.g. a client not applying all expected validations), rather than discarding the data completely. +2. If it is critical to preserve all client mutations and preserve the order of mutations: + 1. Block the client's upload queue on unexpected errors (don't acknowledge the mutation in your backend API). 2. Implement error monitoring to be notified of issues, and resolve the issues as soon as possible. -3. If it is critical to preserve all client changes, but the exact order may not be critical: - 1. On a constraint error, persist the transaction in a separate server-side queue, and acknowledge the change. - 2. The server-side queue can then be inspected and retried asynchronously, without blocking the client-side queue. -4. If it is acceptable to lose some changes due to constraint errors: - 1. Discard the change, or the entire transaction if the changes must all be applied together. +3. If it is critical to preserve all client mutations, but the exact order may not be critical: + 1. 
On a constraint error, persist the transaction in a separate queue on your backend, and acknowledge the change. + 2. The backend queue can then be inspected and retried asynchronously, without blocking the client-side upload queue. +4. If it is acceptable to lose some mutations due to constraint errors: + 1. Discard the mutation, or the entire transaction if the changes must all be applied together. 2. Implement error notifications to detect these issues. See also: -* [Handling Update Conflicts](/usage/lifecycle-maintenance/handling-update-conflicts) -* [Custom Conflict Resolution](/usage/lifecycle-maintenance/handling-update-conflicts/custom-conflict-resolution) +* [Handling Update Conflicts](/handling-writes/handling-update-conflicts) +* [Custom Conflict Resolution](/handling-writes/custom-conflict-resolution) + + +## Questions? If you have any questions about consistency, please [join our Discord](https://discord.gg/powersync) to discuss. diff --git a/architecture/powersync-protocol.mdx b/architecture/powersync-protocol.mdx index 82489d83..8408fe73 100644 --- a/architecture/powersync-protocol.mdx +++ b/architecture/powersync-protocol.mdx @@ -2,56 +2,66 @@ title: "PowerSync Protocol" --- -This contains a broad overview of the sync protocol used between PowerSync clients and a [PowerSync Service](/architecture/powersync-service) instance. -For details, see the implementation in the various client SDKs. +This contains a broad overview of the sync protocol used between PowerSync clients and the [PowerSync Service](/architecture/powersync-service). +For details, see the implementation in the various PowerSync Client SDKs. ## Design The PowerSync protocol is designed to efficiently sync changes to clients, while maintaining [consistency](/architecture/consistency) and integrity of data. -The same process is used to download the initial set of data, bulk download changes after being offline for a while, and incrementally stream changes while connected. 
+The same process is used for: +* Downloading the initial set of data +* Bulk downloading changes after being offline for a while +* Incrementally streaming changes while connected. ## Concepts ### Buckets -All synced data is grouped into [buckets](/usage/sync-rules/organize-data-into-buckets). A bucket represents a collection of synced rows, synced to any number of users. +All synced data is grouped into [buckets](/architecture/powersync-service#bucket-system). A bucket represents a collection of synced rows, synced to any number of users. -[Buckets](/usage/sync-rules/organize-data-into-buckets) is a core concept that allows PowerSync to efficiently scale to thousands of concurrent users, incrementally syncing changes to hundreds of thousands of rows to each. +[Buckets](/architecture/powersync-service#bucket-system) are a core concept that allows PowerSync to efficiently scale to tens of thousands of concurrent clients per PowerSync Service instance, and incrementally sync changes to hundreds of thousands of rows (or even [a million or more](/resources/performance-and-limits#sync-powersync-service-→-client)) to each client. -Each bucket keeps an ordered list of changes to rows within the bucket — generally as "PUT" or "REMOVE" operations. +Each bucket keeps an ordered list of changes to rows within the bucket (operation history) — generally as `PUT` or `REMOVE` operations. + +* `PUT` is the equivalent of `INSERT OR REPLACE` +* `REMOVE` is slightly different from `DELETE`: a row is only deleted from the client if it has been removed from _all_ buckets synced to the client. + + +As a practical example of how buckets manifest themselves, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be obtained from the JWT). Now let's say users with IDs `A` and `B` exist in the source database. 
PowerSync will then replicate data from the source database and create individual buckets with bucket IDs `user_todo_lists["A"]` and `user_todo_lists["B"]`. + +As you can see, buckets are essentially scoped by their parameters (`A` and `B` in this example), so they are always synced as a whole. For user `A` to receive only their relevant to-do lists, they would sync the entire contents of the bucket `user_todo_lists["A"]`. + -* PUT is the equivalent of "INSERT OR REPLACE" -* REMOVE is slightly different from "DELETE": a row is only deleted from the client if it has been removed from _all_ buckets synced to the client. ### Checkpoints -A checkpoint is a sequential id that represents a single point-in-time for consistency purposes. This is further explained in [Consistency](/architecture/consistency). +A checkpoint is a sequential ID that represents a single point-in-time for consistency purposes. This is further explained in [Consistency](/architecture/consistency). -### Checksums +### Checksums for Verifying Data Integrity -For any checkpoint, the client and server can compute a per-bucket checksum. This is essentially the sum of checksums of individual operations within the bucket, which each individual checksum being a hash of the operation data. +For any checkpoint, the client and server compute a per-bucket checksum. This is essentially the sum of checksums of individual operations within the bucket, with each individual checksum being a hash of the operation data. -The checksum helps to ensure that the client has all the correct data. If the bucket data changes on the server, for example because of a manual edit to the underlying bucket storage, the checksums will stop matching, and the client will re-download the entire bucket. +The checksum helps to ensure that the client has all the correct data. 
In the hypothetical scenario where the bucket data becomes corrupted on the PowerSync Service, the checksums will stop matching, and the client will re-download the entire bucket. -Note: Checksums are not a cryptographically secure method to verify data integrity. Rather, it is designed to detect simple data mismatches, whether due to bugs, manual data modification, or other corruption issues. +Note: Checksums are not a cryptographically secure method to verify data integrity. Rather, they are designed to detect simple data mismatches, whether due to bugs, bucket data tampering, or other corruption issues. ### Compacting -To avoid indefinite growth in size of buckets, the history of a bucket can be compacted. Stale updates are replaced with marker entries, which can be merged together, while keeping the same checksums. +To avoid indefinite growth in size of buckets, the operation history of a bucket can be [compacted](/maintenance-ops/compacting-buckets). Stale updates are replaced with marker entries, which can be merged together, while keeping the same checksums. ## Protocol A client initiates a sync session using: -1. A JWT token that typically contains the user\_id, and additional parameters (optional). -2. A list of current buckets and the latest operation id in each. +1. A JWT token that typically contains the `user_id`, and additional parameters (optional). +2. A list of current buckets that the client has, and the latest operation ID in each. -The server then responds with a stream of: +The server then responds with a stream of: -1. "Checkpoint available": A new checkpoint id, with a checksum for each bucket in the checkpoint. -2. "Data": New operations for the above checkpoint for each relevant bucket, starting from the last operation id as sent by the client. -3. "Checkpoint complete": Sent once all data for the checkpoint have been sent. +1. **Checkpoint available**: A new checkpoint ID, with a checksum for each bucket in the checkpoint. +2. 
**Data**: New operations for the above checkpoint for each relevant bucket, starting from the last operation ID as sent by the client. +3. **Checkpoint complete**: Sent once all data for the checkpoint has been sent. The server then waits until a new checkpoint is available, then repeats the above sequence. @@ -59,12 +69,12 @@ The stream can be interrupted at any time, at which point the client will initia If a checksum validation fails on the client, the client will delete the bucket and start a new sync session. -Data for individual rows are represented using JSON. The protocol itself is schemaless - the client is expected to use their own copy of the schema, and gracefully handle schema differences. +Data for individual rows are represented [using JSON](/architecture/client-architecture#client-side-schema-and-sqlite-database-structure). The protocol itself is schemaless — the client is expected to use their own copy of the schema, and gracefully handle schema differences. #### Write Checkpoints -Write checkpoints are used to ensure clients have synced their own changes back before applying downloaded data locally. +Write checkpoints are used to ensure clients have synced their own mutations back before applying downloaded data locally. -Creating a write checkpoint is a separate operation, which is performed by the client after all data has been uploaded. It is important that this happens after the data has been written to the backend source database. +Creating a write checkpoint is a separate operation, which is performed by the client after all mutations have been uploaded (i.e. the client's [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) has been fully processed and is empty). It is [important](/handling-writes/writing-client-changes#why-must-my-write-endpoint-be-synchronous) that this happens after the data has been written to the backend source database. 
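The write-checkpoint rule above can be sketched as follows. This is a simplified model with hypothetical names and plain integer positions; the real implementation compares CDC stream positions (LSNs, resume tokens, binlog positions) rather than integers.

```python
# Sketch of the write-checkpoint rule described above. Names are illustrative
# and positions are plain integers here, standing in for real CDC positions.
def can_apply_checkpoint(upload_queue_empty, client_write_checkpoint,
                         server_write_checkpoint):
    if not upload_queue_empty:
        return False  # local mutations are still pending upload
    if client_write_checkpoint is None:
        return True   # no local writes to wait for
    # Safe once the server has replicated past the client's write checkpoint.
    return server_write_checkpoint >= client_write_checkpoint

blocked = can_apply_checkpoint(False, None, 10)  # still uploading
waiting = can_apply_checkpoint(True, 7, 6)       # server not caught up yet
ready = can_apply_checkpoint(True, 7, 7)         # safe to apply checkpoint
```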
-The server then keeps track of the current CDC stream position on the database (LSN in Postgres and SQL Server, resume token in MongoDB and GTID+Binlog Position in MySQL), and notifies the client when the data has been replicated, as part of checkpoint data in the normal data stream. +The server then keeps track of the current CDC stream position on the database (LSN in Postgres and SQL Server, resume token in MongoDB and GTID+Binlog Position in MySQL), and notifies the client when the data has been replicated, as part of checkpoint data in the normal data stream. diff --git a/architecture/powersync-service.mdx b/architecture/powersync-service.mdx index 53001ec1..ddf78179 100644 --- a/architecture/powersync-service.mdx +++ b/architecture/powersync-service.mdx @@ -2,39 +2,96 @@ title: "PowerSync Service" --- -Each PowerSync instance runs a copy of the PowerSync Service. The primary purpose of this service is to stream changes to clients. -This service has the following components: +When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), which is the server-side component of the sync engine responsible for the _read path_ from the source database to client-side SQLite databases. The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server), and (2) streaming data to clients. Both of these happen based on your _Sync Rules_ or _Sync Streams_ configuration. -## Replication -The service continuously replicates data from the source database, then: +## Bucket System -1. Pre-processes the data according to the [sync rules](/usage/sync-rules) (both data queries and parameter queries), splitting data into [sync buckets](/usage/sync-rules/organize-data-into-buckets) and transforming the data if required. -2. Persists each operation into the relevant sync buckets, ready to be streamed to clients. 
+The concept of _buckets_ is core to PowerSync and its scalability. -The recent history of operations to each row is stored, not only the current version. This supports the "append-only" structure of sync buckets, which allows clients to efficiently stream changes while maintaining data integrity. Sync buckets can be compacted to avoid an ever-growing history. +_Buckets_ are basically partitions of data that allow the PowerSync Service to efficiently query the correct data that a specific client needs to sync. -Replication is initially performed by taking a snapshot of all tables defined in the sync rules, then data is incrementally replicated using [logical replication](https://www.postgresql.org/docs/current/logical-replication.html). When sync rules are updated, this process restarts with a new snapshot. +When you define [Sync Rules](/sync/rules/overview), you define the different buckets that exist, and you define which [parameters](/sync/rules/parameter-queries) are used for each bucket. -## Authentication +**Sync Streams: Implicit Buckets**: In our new [Sync Streams](/sync/streams) system, which is in [early alpha](/sync/overview), buckets and parameters are not explicitly defined, and are instead implicit based on the streams, their queries and subqueries. + +For example, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be embedded in the JWT) to scope those to-do lists. + +Now let's say users with IDs `1`, `2` and `3` exist in the source database. PowerSync will then replicate data from the source database and create individual buckets with bucket IDs of `user_todo_lists["1"]`, `user_todo_lists["2"]` and `user_todo_lists["3"]`. + +If a user with `user_id=1` in its JWT connects to the PowerSync Service and syncs data, PowerSync can very efficiently look up the appropriate bucket to sync, i.e. `user_todo_lists["1"]`. 
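The bucket ID format shown in this example can be illustrated with a small helper. This is our own sketch of the naming scheme described above (definition name plus a JSON array of parameter values), not the PowerSync Service's implementation:

```python
import json

# Our own sketch of the bucket ID naming scheme described above (not the
# service's code): definition name + JSON array of parameter values.
def bucket_id(definition, *params):
    return definition + json.dumps(list(params), separators=(",", ":"))

one = bucket_id("user_todo_lists", "1")
two = bucket_id("user_todos", "user1", "admin")  # multiple parameters
```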
+
+
+As you can see above, a bucket's definition name and set of parameter values together form its _bucket ID_, for example `user_todo_lists["1"]`. If a bucket makes use of multiple parameters, they are comma-separated in the bucket ID, for example `user_todos["user1","admin"]`.
+
+
+
+### Deduplication for Scalability
+
+The bucket system also allows for high scalability because it _deduplicates_ data that is shared between different users.
+
+For example, let's pretend that instead of `user_todo_lists`, we have `org_todo_lists` buckets, each containing the to-do lists for an _organization_, and we use an `organization_id` parameter from the JWT for this bucket. Now let's pretend that users with IDs `1` and `2` both belong to an organization with an ID of `1`. In this scenario, both users `1` and `2` will sync from a bucket with a bucket ID of `org_todo_lists["1"]`.
+
+This also means that the PowerSync Service has to keep track of less state per user — and therefore, server-side resource requirements don't scale linearly with the number of users/clients.
+
+
+## Operation History
+
+Each bucket stores the _recent history_ of operations on each row, not just the latest state of the row.
+
+This is another core part of the PowerSync architecture — the PowerSync Service can efficiently query the _operations_ that each client needs to receive in order to be up to date. Tracking of operation history is also key to the data integrity and [consistency](/architecture/consistency) properties of PowerSync.
+
+When a change occurs in the source database that affects a certain bucket (based on the Sync Rules or Sync Streams configuration), that change will be appended to the operation history in that bucket. Buckets are therefore treated as "append-only" data structures. That being said, to avoid an ever-growing operation history, the buckets can be [compacted](/maintenance-ops/compacting-buckets).
+
+
+## Bucket Storage
+
+The PowerSync Service persists the bucket state in durable storage: there is a pluggable storage layer for bucket data, and MongoDB and Postgres are currently supported. We refer to this as the _bucket storage_ database, and it is separate from the connection to your _source database_ (Postgres, MongoDB, MySQL or SQL Server). Our cloud-hosted offering (PowerSync Cloud) uses MongoDB Atlas as the _bucket storage_ database.
+
+Persisting the bucket state in a database is also part of how PowerSync achieves high scalability: it means that the PowerSync Service can have a low memory footprint even as you scale to very large volumes of synced data and users/clients.
+
+
+## Replication From the Source Database
+
+As mentioned above, one of the primary purposes of the PowerSync Service is replicating data from the source database, based on the Sync Rules or Sync Streams configuration:
+
+
+
+
+
+When the PowerSync Service replicates data from the source database, it:
+
+1. Pre-processes the data according to the [Sync Rules](/sync/rules/overview) or [Sync Streams](/sync/streams/overview), splitting data into _buckets_ (as explained above) and transforming the data if required.
+2. Persists each operation into the relevant buckets, ready to be streamed to clients.
+
+
+### Initial Replication vs. Incremental Replication
+
+Whenever a new version of Sync Rules or Sync Streams is deployed, initial replication takes place by taking a snapshot of all tables/collections referenced in the Sync Rules / Streams.
+
+After that, data is incrementally replicated using a change data capture stream (the specific mechanism depends on the source database type: Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture).

-The service authenticates users using [JWTs](/installation/authentication-setup), before allowing access to data.
## Streaming Sync -Once a user is authenticated: +As mentioned above, the other primary purpose of the PowerSync Service is streaming data to clients. + +The PowerSync Service authenticates clients/users using [JWTs](/configuration/auth/overview). Once a client/user is authenticated: + +1. The PowerSync Service calculates a list of buckets for the user to sync using [Parameter Queries](/sync/rules/parameter-queries). +2. The Service streams any operations added to those buckets since the last time the client/user connected. + +The Service then continuously monitors for buckets that are added or removed, as well as for new operations within those buckets, and streams those changes. -1. The service calculates a list of buckets for the user to sync using [parameter queries](/usage/sync-rules/parameter-queries). -2. The service streams any operations added to those buckets since the last time the user connected. +Only the internal _bucket storage_ of the PowerSync Service is used — the source database is not queried directly during streaming. -The service then continuously monitors for buckets that are added or removed, as well as for new operations within those buckets, and streams those changes. +For more details on exactly how streaming sync works, see [PowerSync Protocol](/architecture/powersync-protocol#protocol). -Only the internal (replicated) storage of the PowerSync Service is used — the source database is not queried directly during streaming. -## Source Code +## Source Code Repo -To access the source code for the PowerSync Service, refer to the [powersync-service](https://github.com/powersync-ja/powersync-service) repo on GitHub. 
+The repo for the PowerSync Service can be found here: -## See Also + + -* [PowerSync Overview](/intro/powersync-overview) diff --git a/client-sdk-references/.DS_Store b/client-sdk-references/.DS_Store deleted file mode 100644 index e65bc501..00000000 Binary files a/client-sdk-references/.DS_Store and /dev/null differ diff --git a/client-sdk-references/capacitor/javascript-orm-support.mdx b/client-sdk-references/capacitor/javascript-orm-support.mdx deleted file mode 100644 index 7ea0165d..00000000 --- a/client-sdk-references/capacitor/javascript-orm-support.mdx +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "JavaScript ORM Support" -url: /client-sdk-references/javascript-web/javascript-orm -sidebarTitle: "ORM Support" ---- - diff --git a/client-sdk-references/flutter/api-reference.mdx b/client-sdk-references/flutter/api-reference.mdx deleted file mode 100644 index 98d6392b..00000000 --- a/client-sdk-references/flutter/api-reference.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: API Reference -url: https://pub.dev/documentation/powersync/latest/powersync/powersync-library.html ---- diff --git a/client-sdk-references/flutter/encryption.mdx b/client-sdk-references/flutter/encryption.mdx deleted file mode 100644 index 9399d3a9..00000000 --- a/client-sdk-references/flutter/encryption.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: Encryption -url: /usage/use-case-examples/data-encryption ---- diff --git a/client-sdk-references/flutter/usage-examples.mdx b/client-sdk-references/flutter/usage-examples.mdx deleted file mode 100644 index c4da6d02..00000000 --- a/client-sdk-references/flutter/usage-examples.mdx +++ /dev/null @@ -1,245 +0,0 @@ ---- -title: "Usage Examples" -description: "Code snippets and guidelines for common scenarios" ---- - -import FlutterWatch from '/snippets/flutter/basic-watch-query.mdx'; - -## Using transactions to group changes - -Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled 
back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). - -The [writeTransaction(callback)](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/writeTransaction.html) method combines all writes into a single transaction, only committing to persistent storage once. - -```dart -deleteList(SqliteDatabase db, String id) async { - await db.writeTransaction((tx) async { - // Delete the main list - await tx.execute('DELETE FROM lists WHERE id = ?', [id]); - // Delete any children of the list - await tx.execute('DELETE FROM todos WHERE list_id = ?', [id]); - }); -} -``` - -Also see [readTransaction(callback)](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/readTransaction.html) . - -## Listen for changes in data - -Use [watch](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/watch.html) to watch for changes to the dependent tables of any SQL query. - - - -## Insert, update, and delete data in the local database - -Use [execute](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/execute.html) to run INSERT, UPDATE or DELETE queries. - -```dart -FloatingActionButton( - onPressed: () async { - await db.execute( - 'INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)', - ['Fred', 'fred@example.org'], - ); - }, - tooltip: '+', - child: const Icon(Icons.add), -); -``` - -## Send changes in local data to your backend service - -Override [uploadData](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/uploadData.html) to send local updates to your backend service. 
- -```dart -@override -Future uploadData(PowerSyncDatabase database) async { - final batch = await database.getCrudBatch(); - if (batch == null) return; - for (var op in batch.crud) { - switch (op.op) { - case UpdateType.put: - // Send the data to your backend service - // Replace `_myApi` with your own API client or service - await _myApi.put(op.table, op.opData!); - break; - default: - // TODO: implement the other operations (patch, delete) - break; - } - } - await batch.complete(); -} -``` - -## Accessing PowerSync connection status information - -Use [SyncStatus](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus-class.html) and register an event listener with [statusStream](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/statusStream.html) to listen for status changes to your PowerSync instance. - - -```dart -class _StatusAppBarState extends State { - late SyncStatus _connectionState; - StreamSubscription? _syncStatusSubscription; - - @override - void initState() { - super.initState(); - _connectionState = db.currentStatus; - _syncStatusSubscription = db.statusStream.listen((event) { - setState(() { - _connectionState = db.currentStatus; - }); - }); - } - - @override - void dispose() { - super.dispose(); - _syncStatusSubscription?.cancel(); - } - - @override - Widget build(BuildContext context) { - final statusIcon = _getStatusIcon(_connectionState); - - return AppBar( - title: Text(widget.title), - actions: [ - ... 
- statusIcon - ], - ); - } -} - -Widget _getStatusIcon(SyncStatus status) { - if (status.anyError != null) { - // The error message is verbose, could be replaced with something - // more user-friendly - if (!status.connected) { - return _makeIcon(status.anyError!.toString(), Icons.cloud_off); - } else { - return _makeIcon(status.anyError!.toString(), Icons.sync_problem); - } - } else if (status.connecting) { - return _makeIcon('Connecting', Icons.cloud_sync_outlined); - } else if (!status.connected) { - return _makeIcon('Not connected', Icons.cloud_off); - } else if (status.uploading && status.downloading) { - // The status changes often between downloading, uploading and both, - // so we use the same icon for all three - return _makeIcon('Uploading and downloading', Icons.cloud_sync_outlined); - } else if (status.uploading) { - return _makeIcon('Uploading', Icons.cloud_sync_outlined); - } else if (status.downloading) { - return _makeIcon('Downloading', Icons.cloud_sync_outlined); - } else { - return _makeIcon('Connected', Icons.cloud_queue); - } -} -``` - -## Wait for the initial sync to complete - -Use the [hasSynced](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus/hasSynced.html) property (available since version 1.5.1 of the SDK) and register a listener to indicate to the user whether the initial sync is in progress. - -```dart -// Example of using hasSynced to show whether the first sync has completed - -/// Global reference to the database -final PowerSyncDatabase db; - -bool hasSynced = false; -StreamSubscription? _syncStatusSubscription; - -// Use the exposed statusStream -Stream watchSyncStatus() { - return db.statusStream; -} - -@override -void initState() { - super.initState(); - _syncStatusSubscription = watchSyncStatus.listen((status) { - setState(() { - hasSynced = status.hasSynced ?? false; - }); - }); -} - -@override -Widget build(BuildContext context) { - return Text(hasSynced ? 'Initial sync completed!' 
: 'Busy with initial sync...'); -} - -// Don't forget to dispose of stream subscriptions when the view is disposed -void dispose() { - super.dispose(); - _syncStatusSubscription?.cancel(); -} -``` - -For async use cases, see the [waitForFirstSync](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/waitForFirstSync.html) method which returns a promise that resolves once the first full sync has completed. - -## Report sync download progress - -You can show users a progress bar when data downloads using the `downloadProgress` property from the -[SyncStatus](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus/downloadProgress.html) class. -`downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. This is especially useful for long-running initial syncs. - -As an example, this widget renders a progress bar when a download is active: - -```dart -import 'package:flutter/material.dart'; -import 'package:powersync/powersync.dart' hide Column; - -class SyncProgressBar extends StatelessWidget { - final PowerSyncDatabase db; - - /// When set, show progress towards the [BucketPriority] instead of towards - /// the full sync. - final BucketPriority? priority; - - const SyncProgressBar({ - super.key, - required this.db, - this.priority, - }); - - @override - Widget build(BuildContext context) { - return StreamBuilder( - stream: db.statusStream, - initialData: db.currentStatus, - builder: (context, snapshot) { - final status = snapshot.requireData; - final progress = switch (priority) { - null => status.downloadProgress, - var priority? 
=> status.downloadProgress?.untilPriority(priority), - }; - - if (progress != null) { - return Center( - child: Column( - children: [ - const Text('Busy with sync...'), - LinearProgressIndicator(value: progress?.downloadedFraction), - Text( - '${progress.downloadedOperations} out of ${progress.totalOperations}') - ], - ), - ); - } else { - return const SizedBox.shrink(); - } - }, - ); - } -} - -``` - -Also see: -- [SyncDownloadProgress API](https://pub.dev/documentation/powersync/latest/powersync/SyncDownloadProgress-extension-type.html) -- [Demo component](https://github.com/powersync-ja/powersync.dart/blob/main/demos/supabase-todolist/lib/widgets/guard_by_sync.dart) diff --git a/client-sdk-references/javascript-web/api-reference.mdx b/client-sdk-references/javascript-web/api-reference.mdx deleted file mode 100644 index 4ddadc23..00000000 --- a/client-sdk-references/javascript-web/api-reference.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: API Reference -url: https://powersync-ja.github.io/powersync-js/web-sdk ---- diff --git a/client-sdk-references/javascript-web/encryption.mdx b/client-sdk-references/javascript-web/encryption.mdx deleted file mode 100644 index 9399d3a9..00000000 --- a/client-sdk-references/javascript-web/encryption.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: Encryption -url: /usage/use-case-examples/data-encryption ---- diff --git a/client-sdk-references/javascript-web/javascript-spa-frameworks.mdx b/client-sdk-references/javascript-web/javascript-spa-frameworks.mdx deleted file mode 100644 index 488d83e3..00000000 --- a/client-sdk-references/javascript-web/javascript-spa-frameworks.mdx +++ /dev/null @@ -1,140 +0,0 @@ ---- -title: "JavaScript SPA Frameworks" -description: "Compatibility with SPA frameworks" ---- - -The PowerSync [JavaScript Web SDK](../javascript-web) is compatible with popular Single-Page Application (SPA) frameworks like React, Vue, Angular, and Svelte. 
- -For [React](#react-hooks) and [Vue](#vue-composables) specifically, wrapper packages are available to support reactivity and live queries, making it easier for developers to leverage PowerSync's features. - -PowerSync also integrates with TanStack libraries, including [TanStack Query](#tanstack-query) for React and [TanStack DB](#tanstack-db) for reactive data management across multiple frameworks. - -### Which package should I choose for queries? - -For React or React Native apps: - -* The [`@powersync/react`](#react-hooks) package is best for most basic use cases, especially when you only need reactive queries with loading and error states. - -* For more advanced scenarios, such as query caching and pagination, [TanStack Query](#tanstack-query) is a powerful solution. The [`@powersync/tanstack-react-query`](#tanstack-query) package extends the `useQuery` hook from `@powersync/react` and adds functionality from [TanStack Query](https://tanstack.com/query/latest/docs/framework/react/overview), making it a better fit for advanced use cases or performance-optimized apps. - -* For reactive data management and live query support across multiple frameworks, consider [TanStack DB](#tanstack-db). PowerSync works with all TanStack DB framework adapters (React, Vue, Solid, Svelte, Angular). - -If you have a Vue app, use the Vue-specific package: [`@powersync/vue`](#vue-composables). - -## React Hooks - - - -The `@powersync/react` package provides React hooks for use with the [JavaScript Web SDK](./) or [React Native SDK](../react-native-and-expo/). These hooks are designed to support reactivity, and can be used to automatically re-render React components when query results update or to access PowerSync connectivity status changes. - -The main hooks available are: - -* `useQuery`: This allows you to access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties. - -* `useStatus`: Access the PowerSync connectivity status. 
This can be used to update the UI based on whether the client is connected or not. - -* `useSuspenseQuery`: This hook also allows you to access the results of a watched query, but its loading and fetching states are handled through [Suspense](https://react.dev/reference/react/Suspense). It automatically converts certain loading/fetching states into Suspense signals, triggering Suspense boundaries in parent components. - - -For advanced watch query features like incremental updates and differential results for React Hooks, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). - - -The full API Reference and example code can be found here: - - - -## TanStack - -PowerSync integrates with multiple TanStack libraries: - -### TanStack Query - -PowerSync integrates with [TanStack Query](https://tanstack.com/query/latest/docs/framework/react/overview) (formerly React Query) through the `@powersync/tanstack-react-query` package. - - - -This package wraps TanStack's `useQuery` and `useSuspenseQuery` hooks, bringing many of TanStack's advanced asynchronous state management features to PowerSync web and React Native applications, including: - -* **Loading and error states** via [`useQuery`](https://tanstack.com/query/latest/docs/framework/react/guides/queries) - -* [**React Suspense**](https://tanstack.com/query/latest/docs/framework/react/guides/suspense) **support**: `useSuspenseQuery` automatically converts certain loading states into Suspense signals, triggering Suspense boundaries in parent components. - -* [**Caching queries**](https://tanstack.com/query/latest/docs/framework/react/guides/caching): Queries are cached with a unique key and reused across the app, so subsequent instances of the same query won't refire unnecessarily. - -* **Built-in support for** [**pagination**](https://tanstack.com/query/latest/docs/framework/react/guides/paginated-queries) - - - #### Additional hooks - - We plan to support more TanStack Query hooks over time. 
If there are specific hooks you're interested in, please let us know on [Discord](https://discord.gg/powersync). - - -#### Example Use Case - -When navigating to or refreshing a page, you may notice a brief UI "flicker" (10-50ms). Here are a few ways to manage this with TanStack Query: - -* **First load**: When a page loads for the first time, use a loading indicator or a Suspense fallback to handle queries. See the [examples](https://www.npmjs.com/package/@powersync/tanstack-react-query#usage). - -* **Subsequent loads**: With TanStack's query caching, subsequent loads of the same page won't refire queries, which reduces the flicker effect. - -* **Block navigation until components are ready**: Using `useSuspenseQuery`, you can ensure that navigation from page A to page B only happens after the queries for page B have loaded. You can do this by combining `useSuspenseQuery` with the `` element and React Router’s [`v7_startTransition`](https://reactrouter.com/en/main/upgrading/future#v7_starttransition) future flag, which blocks navigation until all suspending components are ready. - -#### Usage and Examples - -For more examples and usage details, see the package [README](https://www.npmjs.com/package/@powersync/tanstack-react-query). - -The full API Reference can be found here: - - - -### TanStack DB - -[TanStack DB](https://tanstack.com/db/latest/docs/collections/powersync-collection) is a reactive client store that provides blazing-fast in-memory queries, optimistic updates, and cross-collection queries. When combined with PowerSync, you get the best of both worlds: TanStack DB's powerful query capabilities with PowerSync's battle-tested offline-first and multi-tab capable sync engine. - - - The PowerSync TanStack DB collection is currently in an [Alpha](/resources/feature-status) release. 
- - -**TanStack DB Features:** - -* **Blazing Fast In-Memory Queries**: Built on differential data flow, TanStack DB's live queries update incrementally (rather than re-running entire queries), making queries incredibly fast, even for complex queries across multiple collections. - -* **Reactive Data Flow**: Live queries automatically update when underlying data changes, triggering component re-renders only when necessary. - -* **Optimistic Updates**: Mutations apply instantly to the local state, providing immediate user feedback. TanStack DB maintains separate optimistic state that overlays on top of synced data, and automatically rolls back if the server request fails. - -* **Cross-Collection Queries**: Live queries support joins across collections, allowing you to load normalized data and then denormalize it through queries. - -**Framework Support:** - -PowerSync works with all TanStack DB framework adapters: - -* React ([`@tanstack/react-db`](https://tanstack.com/db/latest/docs/framework/react/overview)) -* Vue ([`@tanstack/vue-db`](https://tanstack.com/db/latest/docs/framework/vue/overview)) -* Solid ([`@tanstack/solid-db`](https://tanstack.com/db/latest/docs/framework/solid/overview)) -* Svelte ([`@tanstack/svelte-db`](https://tanstack.com/db/latest/docs/framework/svelte/overview)) -* Angular ([`@tanstack/angular-db`](https://tanstack.com/db/latest/docs/framework/angular/overview)) - -**Documentation:** - -For detailed documentation, examples, and API reference, see the [TanStack DB PowerSync Collection documentation](https://tanstack.com/db/latest/docs/collections/powersync-collection). - -## Vue Composables - - - -The [`powersync/vue`](https://www.npmjs.com/package/@powersync/vue) package is a Vue-specific wrapper for PowerSync. 
It provides Vue [composables](https://vuejs.org/guide/reusability/composables) that are designed to support reactivity, and can be used to automatically re-render components when query results update or to access PowerSync connectivity status changes. - -The main hooks available are: - -* `useQuery`: This allows you to access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties. - -* `useStatus`: Access the PowerSync connectivity status. This can be used to update the UI based on whether the client is connected or not. - - -For advanced watch query features like incremental updates and differential results for Vue Hooks, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). - - -The full API Reference and example code can be found here: - - \ No newline at end of file diff --git a/client-sdk-references/javascript-web/usage-examples.mdx b/client-sdk-references/javascript-web/usage-examples.mdx deleted file mode 100644 index bc789ef3..00000000 --- a/client-sdk-references/javascript-web/usage-examples.mdx +++ /dev/null @@ -1,401 +0,0 @@ ---- -title: "Usage Examples" -description: "Code snippets and guidelines for common scenarios" ---- - -import JavaScriptAsyncWatch from '/snippets/basic-watch-query-javascript-async.mdx'; -import JavaScriptCallbackWatch from '/snippets/basic-watch-query-javascript-callback.mdx'; - -## Multiple Tab Support - - - * Multiple tab support is not currently available on Android. - * For Safari, use the [`OPFSCoopSyncVFS`](/client-sdk-references/javascript-web#sqlite-virtual-file-systems) virtual file system to ensure stable multi-tab functionality. - * If you encounter a `RangeError: Maximum call stack size exceeded` error, see [Troubleshooting](/resources/troubleshooting#rangeerror-maximum-call-stack-size-exceeded-on-ios-or-safari) for solutions. - - -Using PowerSync between multiple tabs is supported on some web browsers. 
Multiple tab support relies on shared web workers for database and sync streaming operations. When enabled, shared web workers named `shared-DB-worker-[dbFileName]` and `shared-sync-[dbFileName]` will be created. - -#### `shared-DB-worker-[dbFileName]` - -The shared database worker will ensure writes to the database will instantly be available between tabs. - -#### `shared-sync-[dbFileName]` - -The shared sync worker connects directly to the PowerSync backend instance and applies changes to the database. Note that the shared sync worker will call the `fetchCredentials` and `uploadData` method of the latest opened available tab. Closing a tab will shift the latest tab to the previously opened one. - -Currently, using the SDK in multiple tabs without enabling the [enableMultiTabs](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/web/src/db/adapters/web-sql-flags.ts#L23) flag will spawn a standard web worker per tab for DB operations. These workers are safe to operate on the DB concurrently, however changes from one tab may not update watches on other tabs. Only one tab can sync from the PowerSync instance at a time. The sync status will not be shared between tabs, only the oldest tab will connect and display the latest sync status. - -Support is enabled by default if available. This can be disabled as below: - -```js -export const db = new PowerSyncDatabase({ - schema: AppSchema, - database: { - dbFilename: 'my_app_db.sqlite' - }, - flags: { - /** - * Multiple tab support is enabled by default if available. - * This can be disabled by setting this flag to false. - */ - enableMultiTabs: false - } -}); -``` - -## Using transactions to group changes - -Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). 
- -[PowerSyncDatabase.writeTransaction(callback)](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#writetransaction) automatically commits changes after the transaction callback is completed if `tx.rollback()` has not explicitly been called. If an exception is thrown in the callback then changes are automatically rolled back. - -```js -// ListsWidget.jsx -import React, { useState } from 'react'; - -export const ListsWidget = () => { - const [lists, setLists] = useState([]); - - return ( -
-
    - {lists.map((list) => ( -
  • - {list.name} - -
  • - ))} -
- -
- ); -}; -``` - -Also see [PowerSyncDatabase.readTransaction(callback)](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#readtransaction). - -## Listen for changes in data - -Use [PowerSyncDatabase.watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#watch) to watch for changes in source tables. - - - - - - - - - - -For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). - -## Insert, update, and delete data in the local database - -Use [PowerSyncDatabase.execute](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#execute) to run INSERT, UPDATE or DELETE queries. - -```js -const handleButtonClick = async () => { - await db.execute( - 'INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)', - ['Fred', 'fred@example.org'] - ); -}; - -return ( - -); -``` - -## Send changes in local data to your backend service - -Override [uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) to send local updates to your backend service. 
- - -```js -// Implement the uploadData method in your backend connector -async function uploadData(database) { - const batch = await database.getCrudBatch(); - if (batch === null) return; - - for (const op of batch.crud) { - switch (op.op) { - case 'put': - // Send the data to your backend service - // replace `_myApi` with your own API client or service - await _myApi.put(op.table, op.opData); - break; - default: - // TODO: implement the other operations (patch, delete) - break; - } - } - - await batch.complete(); -} -``` - -## Accessing PowerSync connection status information - -Use [PowerSyncDatabase.connected](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#connected) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#registerlistener) to listen for status changes to your PowerSync instance. - -```js -// Example of using connected status to show online or offline - -// Tap into connected -const [connected, setConnected] = React.useState(powersync.connected); - -React.useEffect(() => { -// Register listener for changes made to the powersync status - return powersync.registerListener({ - statusChanged: (status) => { - setConnected(status.connected); - } - }); -}, [powersync]); - -// Icon to show connected or not connected to powersync -// as well as the last synced time - { - Alert.alert( - 'Status', - `${connected ? 'Connected' : 'Disconnected'}. \nLast Synced at ${powersync.currentStatus?.lastSyncedAt.toISOString() ?? 
'-' - }\nVersion: ${powersync.sdkVersion}` - ); - }} -/>; -``` - -## Wait for the initial sync to complete - -Use the [hasSynced](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus#hassynced) property (available since version 0.4.1 of the SDK) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#registerlistener) to indicate to the user whether the initial sync is in progress. - -```js -// Example of using hasSynced to show whether the first sync has completed - -// Tap into hasSynced -const [hasSynced, setHasSynced] = React.useState(powerSync.currentStatus?.hasSynced || false); - - React.useEffect(() => { - // Register listener for changes made to the powersync status - return powerSync.registerListener({ - statusChanged: (status) => { - setHasSynced(!!status.hasSynced); - } - }); - }, [powerSync]); - -return
<Text>{hasSynced ? 'Initial sync completed!' : 'Busy with initial sync...'}</Text>
; -``` - -For async use cases, see [PowerSyncDatabase.waitForFirstSync()](https://powersync-ja.github.io/powersync-js/web-sdk/classes/AbstractPowerSyncDatabase#waitforfirstsync), which returns a promise that resolves once the first full sync has completed (it queries the internal SQL [ps\_buckets](/architecture/client-architecture) table to determine if data has been synced). - -## Report sync download progress - -You can show users a progress bar when data downloads using the `downloadProgress` property from the -[SyncStatus](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus) class. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. - -Example (React, using [MUI](https://mui.com) components): - -```jsx -import { Box, LinearProgress, Stack, Typography } from '@mui/material'; -import { useStatus } from '@powersync/react'; -import { FC, ReactNode } from 'react'; - -export const SyncProgressBar: FC<{ priority?: number }> = ({ priority }) => { - const status = useStatus(); - const progressUntilNextSync = status.downloadProgress; - const progress = priority == null ? progressUntilNextSync : progressUntilNextSync?.untilPriority(priority); - - if (progress == null) { - return <>; - } - - return ( - - - - {progress.downloadedOperations == progress.totalOperations ? ( - Applying server-side changes - ) : ( - - Downloaded {progress.downloadedOperations} out of {progress.totalOperations}. - - )} - - - ); -}; -``` - -Also see: -- [SyncStatus API](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus) -- [Demo component](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-supabase-todolist/src/components/widgets/GuardBySync.tsx) - -## Using PowerSyncDatabase Flags - -This guide provides an overview of the customizable flags available for the `PowerSyncDatabase` in the JavaScript Web SDK. 
These flags allow you to enable or disable specific features to suit your application's requirements. - -### Configuring Flags - -You can configure flags during the initialization of the `PowerSyncDatabase`. Flags can be set using the `flags` property, which allows you to enable or disable specific functionalities. - -```javascript -import { PowerSyncDatabase, resolveWebPowerSyncFlags, WebPowerSyncFlags } from '@powersync/web'; -import { AppSchema } from '@/library/powersync/AppSchema'; - -// Define custom flags -const customFlags: WebPowerSyncFlags = resolveWebPowerSyncFlags({ - enableMultiTabs: true, - broadcastLogs: true, - disableSSRWarning: false, - ssrMode: false, - useWebWorker: true, -}); - -// Create the PowerSync database instance -export const db = new PowerSyncDatabase({ - schema: AppSchema, - database: { - dbFilename: 'example.db', - }, - flags: customFlags, -}); -``` - -#### Available Flags - - default: `true` - - Enables support for multiple tabs using shared web workers. When enabled, multiple tabs can interact with the same database and sync data seamlessly. - - - - default: `false` - - Enables the broadcasting of logs for debugging purposes. This flag helps monitor shared worker logs in a multi-tab environment. - - - - default: `false` - - Disables warnings when running in SSR (Server-Side Rendering) mode. - - - - default: `false` - - Enables SSR mode. In this mode, only empty query results will be returned, and syncing with the backend is disabled. - - - - default: `true` - - Enables the use of web workers for database operations. Disabling this flag also disables multi-tab support. - - - -### Flag Behavior - -#### Example 1: Multi-Tab Support - -By default, multi-tab support is enabled if supported by the browser. 
To explicitly disable this feature: - -```javascript -export const db = new PowerSyncDatabase({ - schema: AppSchema, - database: { - dbFilename: 'my_app_db.sqlite', - }, - flags: { - enableMultiTabs: false, - }, -}); -``` - -When disabled, each tab will use independent workers, and changes in one tab will not automatically propagate to others. - -#### Example 2: SSR Mode - -To enable SSR mode and suppress warnings: - -```javascript -export const db = new PowerSyncDatabase({ - schema: AppSchema, - database: { - dbFilename: 'my_app_db.sqlite', - }, - flags: { - ssrMode: true, - disableSSRWarning: true, - }, -}); -``` - -#### Example 3: Verbose Debugging with Broadcast Logs - -To enable detailed logging for debugging: - -```javascript -export const db = new PowerSyncDatabase({ - schema: AppSchema, - database: { - dbFilename: 'my_app_db.sqlite', - }, - flags: { - broadcastLogs: true, - }, -}); -``` - -Logs will include detailed insights into database operations and synchronization. - -### Recommendations - -1. **Set `enableMultiTabs`** to `true` if your application requires seamless data sharing across multiple tabs. -2. **Set `useWebWorker`** to `true` for efficient database operations using web workers. -3. **Set `broadcastLogs`** to `true` during development to troubleshoot and monitor database and sync operations. -4. **Set `disableSSRWarning`** to `true` when running in SSR mode to avoid unnecessary console warnings. -5. **Test combinations** of flags to validate their behavior in your application's specific use case. 
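The recommendations above can be sketched as two plain flag objects — one for development, one for production. This is an illustrative convention only: the flag names match the options documented above, but the dev/prod split is not part of the official API.

```javascript
// Illustrative development flag set; names follow the flags documented above.
const developmentFlags = {
  enableMultiTabs: true,    // share one database and sync state across tabs
  useWebWorker: true,       // run database operations in a web worker
  broadcastLogs: true,      // surface shared-worker logs while debugging
  disableSSRWarning: false,
  ssrMode: false
};

// Production flag set: identical, minus the debug logging.
const productionFlags = { ...developmentFlags, broadcastLogs: false };
```

Either object can then be passed (optionally via `resolveWebPowerSyncFlags`) as the `flags` property when constructing `PowerSyncDatabase`, as in the configuration example above.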
\ No newline at end of file diff --git a/client-sdk-references/kotlin/encryption.mdx b/client-sdk-references/kotlin/encryption.mdx deleted file mode 100644 index 9399d3a9..00000000 --- a/client-sdk-references/kotlin/encryption.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: Encryption -url: /usage/use-case-examples/data-encryption ---- diff --git a/client-sdk-references/kotlin/usage-examples.mdx b/client-sdk-references/kotlin/usage-examples.mdx deleted file mode 100644 index 0f97c218..00000000 --- a/client-sdk-references/kotlin/usage-examples.mdx +++ /dev/null @@ -1,207 +0,0 @@ ---- -title: "Usage Examples" -description: "Code snippets and guidelines for common scenarios" ---- - -import KotlinWatch from '/snippets/kotlin/basic-watch-query.mdx'; - -## Using transactions to group changes - -Use `writeTransaction` to group statements that can write to the database. - -```kotlin -database.writeTransaction { - database.execute( - sql = "DELETE FROM list WHERE id = ?", - parameters = listOf(listId) - ) - database.execute( - sql = "DELETE FROM todos WHERE list_id = ?", - parameters = listOf(listId) - ) -} -``` - -## Listen for changes in data - -Use the `watch` method to watch for changes to the dependent tables of any SQL query. - - - -## Insert, update, and delete data in the local database - -Use `execute` to run INSERT, UPDATE or DELETE queries. - -```kotlin -suspend fun updateCustomer(id: String, name: String, email: String) { - database.execute( - "UPDATE customers SET name = ? WHERE email = ?", - listOf(name, email) - ) -} -``` - -## Send changes in local data to your backend service - -Override `uploadData` to send local updates to your backend service. If you are using Supabase, see [SupabaseConnector.kt](https://github.com/powersync-ja/powersync-kotlin/blob/main/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt) for a complete implementation. 
- -```kotlin -/** - * This function is called whenever there is data to upload, whether the device is online or offline. - * If this call throws an error, it is retried periodically. - */ -override suspend fun uploadData(database: PowerSyncDatabase) { - - val transaction = database.getNextCrudTransaction() ?: return; - - var lastEntry: CrudEntry? = null; - try { - - for (entry in transaction.crud) { - lastEntry = entry; - - val table = supabaseClient.from(entry.table) - when (entry.op) { - UpdateType.PUT -> { - val data = entry.opData?.toMutableMap() ?: mutableMapOf() - data["id"] = entry.id - table.upsert(data) - } - - UpdateType.PATCH -> { - table.update(entry.opData!!) { - filter { - eq("id", entry.id) - } - } - } - - UpdateType.DELETE -> { - table.delete { - filter { - eq("id", entry.id) - } - } - } - } - } - - transaction.complete(null); - - } catch (e: Exception) { - println("Data upload error - retrying last entry: ${lastEntry!!}, $e") - throw e - } -} -``` - -## Accessing PowerSync connection status information - -```kotlin -// Intialize the DB -val db = remember { PowerSyncDatabase(factory, schema) } -// Get the status as a flow -val status = db.currentStatus.asFlow().collectAsState(initial = null) -// Use the emitted values from the flow e.g. to check if connected -val isConnected = status.value?.connected -``` - -## Wait for the initial sync to complete - -Use the `hasSynced` property and register a listener to indicate to the user whether the initial sync is in progress. 
- -```kotlin -val db = remember { PowerSyncDatabase(factory, schema) } -val status = db.currentStatus.asFlow().collectAsState(initial = null) -val hasSynced by remember { derivedStateOf { status.value?.hasSynced } } - -when { - hasSynced == null || hasSynced == false -> { - Box( - modifier = Modifier.fillMaxSize().background(MaterialTheme.colors.background), - contentAlignment = Alignment.Center - ) { - Text( - text = "Busy with initial sync...", - style = MaterialTheme.typography.h6 - ) - } - } - else -> { - ... show rest of UI -``` - -For async use cases, use `waitForFirstSync` method which is a suspense function that resolves once the first full sync has completed. - -## Report sync download progress - -You can show users a progress bar when data downloads using the `syncStatus.downloadProgress` property. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives a value from 0.0 to 1.0 representing the total sync progress. - -Example (Compose): - -```kotlin -import androidx.compose.foundation.background -import androidx.compose.foundation.layout.Arrangement -import androidx.compose.foundation.layout.Column -import androidx.compose.foundation.layout.fillMaxSize -import androidx.compose.foundation.layout.fillMaxWidth -import androidx.compose.foundation.layout.padding -import androidx.compose.material.LinearProgressIndicator -import androidx.compose.material.MaterialTheme -import androidx.compose.material.Text -import androidx.compose.runtime.Composable -import androidx.compose.runtime.getValue -import androidx.compose.ui.Alignment -import androidx.compose.ui.Modifier -import androidx.compose.ui.unit.dp -import com.powersync.PowerSyncDatabase -import com.powersync.bucket.BucketPriority -import com.powersync.compose.composeState - -/** - * Shows a progress bar while a sync is active. 
- * - * The [priority] parameter can be set to, instead of showing progress until the end of the entire - * sync, only show progress until data in the [BucketPriority] is synced. - */ -@Composable -fun SyncProgressBar( - db: PowerSyncDatabase, - priority: BucketPriority? = null, -) { - val state by db.currentStatus.composeState() - val progress = state.downloadProgress?.let { - if (priority == null) { - it - } else { - it.untilPriority(priority) - } - } - - if (progress == null) { - return - } - - Column( - modifier = Modifier.fillMaxSize().background(MaterialTheme.colors.background), - horizontalAlignment = Alignment.CenterHorizontally, - verticalArrangement = Arrangement.Center, - ) { - LinearProgressIndicator( - modifier = Modifier.fillMaxWidth().padding(8.dp), - progress = progress.fraction, - ) - - if (progress.downloadedOperations == progress.totalOperations) { - Text("Applying server-side changes...") - } else { - Text("Downloaded ${progress.downloadedOperations} out of ${progress.totalOperations}.") - } - } -} -``` - -Also see: -- [SyncDownloadProgress API](https://powersync-ja.github.io/powersync-kotlin/core/com.powersync.sync/-sync-download-progress/index.html) -- [Demo component](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/supabase-todolist/shared/src/commonMain/kotlin/com/powersync/demos/components/GuardBySync.kt) - diff --git a/client-sdk-references/node/javascript-orm-support.mdx b/client-sdk-references/node/javascript-orm-support.mdx deleted file mode 100644 index db027a91..00000000 --- a/client-sdk-references/node/javascript-orm-support.mdx +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "JavaScript ORM Support" -url: /client-sdk-references/javascript-web/javascript-orm -sidebarTitle: "ORM Support" ---- diff --git a/client-sdk-references/react-native-and-expo/api-reference.mdx b/client-sdk-references/react-native-and-expo/api-reference.mdx deleted file mode 100644 index 6c387837..00000000 --- 
a/client-sdk-references/react-native-and-expo/api-reference.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: API Reference -url: https://powersync-ja.github.io/powersync-js/react-native-sdk ---- diff --git a/client-sdk-references/react-native-and-expo/encryption.mdx b/client-sdk-references/react-native-and-expo/encryption.mdx deleted file mode 100644 index 9399d3a9..00000000 --- a/client-sdk-references/react-native-and-expo/encryption.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: Encryption -url: /usage/use-case-examples/data-encryption ---- diff --git a/client-sdk-references/react-native-and-expo/javascript-orm-support.mdx b/client-sdk-references/react-native-and-expo/javascript-orm-support.mdx deleted file mode 100644 index db027a91..00000000 --- a/client-sdk-references/react-native-and-expo/javascript-orm-support.mdx +++ /dev/null @@ -1,5 +0,0 @@ ---- -title: "JavaScript ORM Support" -url: /client-sdk-references/javascript-web/javascript-orm -sidebarTitle: "ORM Support" ---- diff --git a/client-sdk-references/react-native-and-expo/usage-examples.mdx b/client-sdk-references/react-native-and-expo/usage-examples.mdx deleted file mode 100644 index a268b4d0..00000000 --- a/client-sdk-references/react-native-and-expo/usage-examples.mdx +++ /dev/null @@ -1,247 +0,0 @@ ---- -title: "Usage Examples" -description: "Code snippets and guidelines for common scenarios" ---- - -import JavaScriptAsyncWatch from '/snippets/basic-watch-query-javascript-async.mdx'; -import JavaScriptCallbackWatch from '/snippets/basic-watch-query-javascript-callback.mdx'; - -## Using Hooks - -A separate `powersync-react` package is available containing React hooks for PowerSync: - - -See its README for example code. - -## Using transactions to group changes - -Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. 
This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). - -[PowerSyncDatabase.writeTransaction(callback)](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#writetransaction) automatically commits changes after the transaction callback is completed if [tx.rollback()](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/db/DBAdapter.ts#L53) has not explicitly been called. If an exception is thrown in the callback then changes are automatically rolled back. - - -```js -// ListsWidget.jsx -import {Alert, Button, FlatList, Text, View} from 'react-native'; - -export const ListsWidget = () => { - // Populate lists with one of methods listed above - const [lists, setLists] = React.useState([]); - - return ( - - ({key: list.id, ...list}))} - renderItem={({item}) => ( - {item.name} - -); -``` - -## Send changes in local data to your backend service - -Override [uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) to send local updates to your backend service. 
- -```js -// Implement the uploadData method in your backend connector -async function uploadData(database) { - const batch = await database.getCrudBatch(); - if (batch === null) return; - - for (const op of batch.crud) { - switch (op.op) { - case 'put': - // Send the data to your backend service - // replace `_myApi` with your own API client or service - await _myApi.put(op.table, op.opData); - break; - default: - // TODO: implement the other operations (patch, delete) - break; - } - } - - await batch.complete(); -} -``` - -## Accessing PowerSync connection status information - -Use [PowerSyncDatabase.connected](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#connected) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#registerlistener) to listen for status changes to your PowerSync instance. - -```js -// Example of using connected status to show online or offline - -// Tap into connected -const [connected, setConnected] = React.useState(powersync.connected); - -React.useEffect(() => { -// Register listener for changes made to the powersync status - return powersync.registerListener({ - statusChanged: (status) => { - setConnected(status.connected); - } - }); -}, [powersync]); - -// Icon to show connected or not connected to powersync -// as well as the last synced time - { - Alert.alert( - 'Status', - `${connected ? 'Connected' : 'Disconnected'}. \nLast Synced at ${powersync.currentStatus?.lastSyncedAt.toISOString() ?? 
'-' - }\nVersion: ${powersync.sdkVersion}` - ); - }} -/>; -``` - -## Wait for the initial sync to complete - -Use the [hasSynced](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus#hassynced) property (available since version 1.4.1 of the SDK) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#registerlistener) to indicate to the user whether the initial sync is in progress. - -```js -// Example of using hasSynced to show whether the first sync has completed - -// Tap into hasSynced -const [hasSynced, setHasSynced] = React.useState(powerSync.currentStatus?.hasSynced || false); - - React.useEffect(() => { - // Register listener for changes made to the powersync status - return powerSync.registerListener({ - statusChanged: (status) => { - setHasSynced(!!status.hasSynced); - } - }); - }, [powerSync]); - -return {hasSynced ? 'Initial sync completed!' : 'Busy with initial sync...'}; -``` - -For async use cases, see [PowerSyncDatabase.waitForFirstSync](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/AbstractPowerSyncDatabase#waitforfirstsync), which returns a promise that resolves once the first full sync has completed (it queries the internal SQL [ps\_buckets](/architecture/client-architecture) table to determine if data has been synced). - -## Report sync download progress - -You can show users a progress bar when data downloads using the `downloadProgress` property from the [SyncStatus](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus) class. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. 
- -Example: - -```jsx -import { useStatus } from '@powersync/react'; -import { FC, ReactNode } from 'react'; -import { View } from 'react-native'; -import { Text, LinearProgress } from '@rneui/themed'; - -export const SyncProgressBar: FC<{ priority?: number }> = ({ priority }) => { - const status = useStatus(); - const progressUntilNextSync = status.downloadProgress; - const progress = priority == null ? progressUntilNextSync : progressUntilNextSync?.untilPriority(priority); - - if (progress == null) { - return <>; - } - - return ( - - - {progress.downloadedOperations == progress.totalOperations ? ( - Applying server-side changes - ) : ( - - Downloaded {progress.downloadedOperations} out of {progress.totalOperations}. - - )} - - ); -}; -``` - -Also see: -- [SyncStatus API](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus) -- [Demo component](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/library/widgets/GuardBySync.tsx) - diff --git a/client-sdk-references/swift/usage-examples.mdx b/client-sdk-references/swift/usage-examples.mdx deleted file mode 100644 index 5edd858a..00000000 --- a/client-sdk-references/swift/usage-examples.mdx +++ /dev/null @@ -1,180 +0,0 @@ ---- -title: "Usage Examples" -description: "Code snippets and guidelines for common scenarios in Swift" ---- - -import SwiftWatch from '/snippets/swift/basic-watch-query.mdx'; - -## Using transactions to group changes - -Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). 
- -```swift -// Delete a list and its todos in a transaction -func deleteList(db: PowerSyncDatabase, listId: String) async throws { - try await db.writeTransaction { tx in - try await tx.execute(sql: "DELETE FROM lists WHERE id = ?", parameters: [listId]) - try await tx.execute(sql: "DELETE FROM todos WHERE list_id = ?", parameters: [listId]) - } -} -``` - -Also see [`readTransaction`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/queries/readtransaction(callback:)). - -## Listen for changes in data - -Use `watch` to watch for changes to the dependent tables of any SQL query. - - - -## Insert, update, and delete data in the local database - -Use `execute` to run INSERT, UPDATE or DELETE queries. - -```swift -// Insert a new TODO -func insertTodo(_ todo: NewTodo, _ listId: String) async throws { - try await db.execute( - sql: "INSERT INTO \(TODOS_TABLE) (id, created_at, created_by, description, list_id, completed) VALUES (uuid(), datetime(), ?, ?, ?, ?)", - parameters: [connector.currentUserID, todo.description, listId, todo.isComplete] - ) -} -``` - -## Send changes in local data to your backend service - -Override `uploadData` to send local updates to your backend service. 
- -```swift -class MyConnector: PowerSyncBackendConnector { - override func uploadData(database: PowerSyncDatabaseProtocol) async throws { - let batch = try await database.getCrudBatch() - guard let batch = batch else { return } - for entry in batch.crud { - switch entry.op { - case .put: - // Send the data to your backend service - // Replace `_myApi` with your own API client or service - try await _myApi.put(table: entry.table, data: entry.opData) - default: - // TODO: implement the other operations (patch, delete) - break - } - } - try await batch.complete(writeCheckpoint: nil) - } -} -``` - -## Accessing PowerSync connection status information - -Use [`currentStatus`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/powersyncdatabaseprotocol/currentstatus) and observe changes to listen for status changes to your PowerSync instance. - -```swift -import Foundation -import SwiftUI -import PowerSync - -struct PowerSyncConnectionIndicator: View { - private let powersync: any PowerSyncDatabaseProtocol - @State private var connected: Bool = false - - init(powersync: any PowerSyncDatabaseProtocol) { - self.powersync = powersync - } - - var body: some View { - let iconName = connected ? "wifi" : "wifi.slash" - let description = connected ? "Online" : "Offline" - - Image(systemName: iconName) - .accessibility(label: Text(description)) - .task { - self.connected = powersync.currentStatus.connected - - for await status in powersync.currentStatus.asFlow() { - self.connected = status.connected - } - } - } -} -``` - -## Wait for the initial sync to complete - -Use the `hasSynced` property and observe status changes to indicate to the user whether the initial sync is in progress. 
- -```swift -struct WaitForFirstSync: View { - private let powersync: any PowerSyncDatabaseProtocol - @State var didSync: Bool = false - - init(powersync: any PowerSyncDatabaseProtocol) { - self.powersync = powersync - } - - var body: some View { - if !didSync { - ProgressView().task { - do { - try await powersync.waitForFirstSync() - } catch { - // TODO: Handle errors - } - } - } - } -} -``` - -For async use cases, use [`waitForFirstSync`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/powersyncdatabaseprotocol/waitforfirstsync()). - -## Report sync download progress - -You can show users a progress bar when data downloads using the `downloadProgress` property from the [`SyncStatusData`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncstatusdata/) object. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. This is especially useful for long-running initial syncs. - -Example: - -```swift -struct SyncProgressIndicator: View { - private let powersync: any PowerSyncDatabaseProtocol - private let priority: BucketPriority? - @State private var status: SyncStatusData? = nil - - init(powersync: any PowerSyncDatabaseProtocol, priority: BucketPriority? 
= nil) { - self.powersync = powersync - self.priority = priority - } - - var body: some View { - VStack { - if let totalProgress = status?.downloadProgress { - let progress = if let priority = self.priority { - totalProgress.untilPriority(priority: priority) - } else { - totalProgress - } - - ProgressView(value: progress.fraction) - - if progress.downloadedOperations == progress.totalOperations { - Text("Applying server-side changes...") - } else { - Text("Downloaded \(progress.downloadedOperations) out of \(progress.totalOperations)") - } - } - }.task { - status = powersync.currentStatus - for await status in powersync.currentStatus.asFlow() { - self.status = status - } - } - } -} -``` - -Also see: -- [SyncStatusData API](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncstatusdata/) -- [SyncDownloadProgress API](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncdownloadprogress/) -- [Demo component](https://github.com/powersync-ja/powersync-swift/blob/main/Demo/PowerSyncExample/Components/ListView.swift) - diff --git a/tutorials/client/attachments-and-files/aws-s3-storage-adapter.mdx b/client-sdks/advanced/attachments-aws-s3-storage.mdx similarity index 99% rename from tutorials/client/attachments-and-files/aws-s3-storage-adapter.mdx rename to client-sdks/advanced/attachments-aws-s3-storage.mdx index 22bd8a0e..65d50b9b 100644 --- a/tutorials/client/attachments-and-files/aws-s3-storage-adapter.mdx +++ b/client-sdks/advanced/attachments-aws-s3-storage.mdx @@ -1,6 +1,6 @@ --- title: "Use AWS S3 for attachment storage" -description: "In this tutorial, we will show you how to replace Supabase Storage with AWS S3 for handling attachments in the [React Native To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist)." 
+description: "Replace Supabase Storage with AWS S3 for handling attachments in the [React Native To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist)." sidebarTitle: "AWS S3 Storage" --- diff --git a/tutorials/client/attachments-and-files/pdf-attachment.mdx b/client-sdks/advanced/attachments-pdfs.mdx similarity index 97% rename from tutorials/client/attachments-and-files/pdf-attachment.mdx rename to client-sdks/advanced/attachments-pdfs.mdx index 44dc1b7b..6f576e71 100644 --- a/tutorials/client/attachments-and-files/pdf-attachment.mdx +++ b/client-sdks/advanced/attachments-pdfs.mdx @@ -1,6 +1,6 @@ --- -title: "PDF attachments" -description: "In this tutorial we will show you how to modify the [PhotoAttachmentQueue](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/library/powersync/PhotoAttachmentQueue.ts) for PDF attachments." +title: "PDF Attachments" +description: "Learn how to modify the [PhotoAttachmentQueue](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/library/powersync/PhotoAttachmentQueue.ts) for PDF attachments." 
keywords: ["pdf", "attachment", "storage"] --- @@ -20,7 +20,7 @@ An overview of the required changes are: - Clone the [To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist) repo - Follow the instructions in the [README](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/README.md) and ensure that the app runs locally - A running PowerSync Service and Supabase (can be self-hosted) - - [Storage configuration in Supabase](/integration-guides/supabase-+-powersync/handling-attachments#configure-storage-in-supabase) + - [Storage configuration in Supabase](/integrations/supabase/attachments#configure-storage-in-supabase) # Steps diff --git a/usage/use-case-examples/attachments-files.mdx b/client-sdks/advanced/attachments.mdx similarity index 92% rename from usage/use-case-examples/attachments-files.mdx rename to client-sdks/advanced/attachments.mdx index 6690dd48..a7a954aa 100644 --- a/usage/use-case-examples/attachments-files.mdx +++ b/client-sdks/advanced/attachments.mdx @@ -19,5 +19,5 @@ We currently have these helper packages available to manage attachments: | **Swift** | [attachments](https://github.com/powersync-ja/powersync-swift/blob/main/Sources/PowerSync/attachments/README.md) | [To-Do List demo app](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/PowerSyncExample) | The example implementations above use [Supabase Storage](https://supabase.com/docs/guides/storage) as storage provider. 
-* For more information on the use of Supabase as the storage provider, refer to [Handling Attachments](/integration-guides/supabase-+-powersync/handling-attachments) -* To learn how to adapt the implementations to use AWS S3 as the storage provider, see [this tutorial](/tutorials/client/attachments-and-files/aws-s3-storage-adapter) +* For more information on the use of Supabase as the storage provider, refer to [Handling Attachments](/integrations/supabase/attachments) +* To learn how to adapt the implementations to use AWS S3 as the storage provider, see [this tutorial](/client-sdks/advanced/attachments-aws-s3-storage) diff --git a/usage/use-case-examples/background-syncing.mdx b/client-sdks/advanced/background-syncing.mdx similarity index 100% rename from usage/use-case-examples/background-syncing.mdx rename to client-sdks/advanced/background-syncing.mdx diff --git a/usage/use-case-examples/crdts.mdx b/client-sdks/advanced/crdts.mdx similarity index 59% rename from usage/use-case-examples/crdts.mdx rename to client-sdks/advanced/crdts.mdx index 0560d380..9f5eba0b 100644 --- a/usage/use-case-examples/crdts.mdx +++ b/client-sdks/advanced/crdts.mdx @@ -1,6 +1,6 @@ --- -title: "CRDTs" -description: "While PowerSync does not use [CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) directly as part of its sync or conflict resolution process, CRDT data (from a library such as [Yjs](https://github.com/yjs/yjs) or y-crdt) may be persisted and synced using PowerSync." +title: "CRDT Data Structures" +description: "PowerSync does not use [CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) directly as part of its sync or conflict resolution process, but CRDT data structures (from a library such as [Yjs](https://github.com/yjs/yjs) or y-crdt) may be persisted and synced using PowerSync." --- This may be useful for cases such as document editing, where last-write-wins is not sufficient for conflict resolution. 
PowerSync becomes the provider for CRDT data — both for local storage and for propagating changes to other clients. diff --git a/usage/use-case-examples/custom-types-arrays-and-json.mdx b/client-sdks/advanced/custom-types-arrays-and-json.mdx similarity index 99% rename from usage/use-case-examples/custom-types-arrays-and-json.mdx rename to client-sdks/advanced/custom-types-arrays-and-json.mdx index 9dd96d26..30916e7c 100644 --- a/usage/use-case-examples/custom-types-arrays-and-json.mdx +++ b/client-sdks/advanced/custom-types-arrays-and-json.mdx @@ -207,7 +207,7 @@ bucket_definitions: ``` -See these additional details when using the `IN` operator: [Operators](/usage/sync-rules/operators-and-functions#operators) +See these additional details when using the `IN` operator: [Operators](/sync/operators-and-functions#operators) ### Client SDK @@ -347,13 +347,13 @@ You can write the entire updated column value as a string, or, with `trackPrevio - + **Attention Supabase users:** Supabase can handle writes with arrays, but you must convert from string to array using `jsonDecode` in the connector's `uploadData` function. The default implementation of `uploadData` does not handle complex types like arrays automatically. - + ## Custom Types -PowerSync serializes custom types as text. For details, see [types in sync rules](/usage/sync-rules/types). +PowerSync serializes custom types as text. For details, see [types in sync rules](/sync/types). ### Postgres @@ -371,7 +371,7 @@ create type location_address AS ( ### Sync Rules Custom type columns are converted to text by the PowerSync Service. -Depending on whether the `custom_postgres_types` [compatibility option](/usage/sync-rules/compatibility) is enabled, +Depending on whether the `custom_postgres_types` [compatibility option](/sync/advanced/compatibility) is enabled, PowerSync would sync the row as: - `{"street":"1000 S Colorado Blvd.","city":"Denver","state":"CO","zip":80211}` if the option is enabled. 
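Since custom-type columns arrive on the client as text, reading them back is a plain string-parsing step. A minimal sketch, assuming the `custom_postgres_types` compatibility option is enabled so the column holds the JSON form shown above (the `row` and `address` names are illustrative, not from any schema in this document):

```javascript
// A synced row whose custom-type column was serialized to JSON text
// by the PowerSync Service (value taken from the example above).
const row = {
  address: '{"street":"1000 S Colorado Blvd.","city":"Denver","state":"CO","zip":80211}'
};

// Parse the text back into a structured object on read.
const address = JSON.parse(row.address);
console.log(address.city); // "Denver"
console.log(address.zip);  // 80211
```

With the compatibility option disabled, the value is synced in a different text representation and `JSON.parse` would not apply.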
diff --git a/usage/use-case-examples/data-encryption.mdx b/client-sdks/advanced/data-encryption.mdx similarity index 81% rename from usage/use-case-examples/data-encryption.mdx rename to client-sdks/advanced/data-encryption.mdx index 2fc1fb09..ef7eec01 100644 --- a/usage/use-case-examples/data-encryption.mdx +++ b/client-sdks/advanced/data-encryption.mdx @@ -4,7 +4,7 @@ title: "Data Encryption" ### In Transit Encryption -Data is always encrypted in transit using TLS — both between the client and PowerSync, and between PowerSync [and the source database](/usage/lifecycle-maintenance/postgres-maintenance#tls). +Data is always encrypted in transit using TLS — both between the client and PowerSync, and between PowerSync [and the source database](/configuration/source-db/postgres-maintenance#tls). ### At Rest Encryption @@ -45,7 +45,7 @@ The client-side database can be encrypted at rest. This is currently available f -Encryption support is available for PowerSync's Node.js SDK using [`better-sqlite3-multiple-ciphers`](https://www.npmjs.com/package/better-sqlite3-multiple-ciphers). See usage details and code examples in the [Node.js SDK reference](/client-sdk-references/node#encryption-and-custom-sqlite-drivers). +Encryption support is available for PowerSync's Node.js SDK using [`better-sqlite3-multiple-ciphers`](https://www.npmjs.com/package/better-sqlite3-multiple-ciphers). See usage details and code examples in the [Node.js SDK reference](/client-sdks/reference/node#encryption-and-custom-sqlite-drivers). @@ -63,10 +63,10 @@ For implementation details see [`sqlite3multipleciphers`](https://github.com/pow For end-to-end encryption, the encrypted data can be synced using PowerSync. The data can then either be encrypted and decrypted directly in memory by the application, or a separate local-only table can be used to persist the decrypted data — allowing querying the data directly. 
-[Raw SQLite Tables](/usage/use-case-examples/raw-tables) can be used for full control over the SQLite schema and managing tables for the decrypted data. We have a [React & Supabase example app](https://github.com/powersync-community/react-supabase-chat-e2ee) that demonstrates this approach. See also the accompanying [blog post](https://www.powersync.com/blog/building-an-e2ee-chat-app-with-powersync-supabase). +[Raw SQLite Tables](/client-sdks/advanced/raw-tables) can be used for full control over the SQLite schema and managing tables for the decrypted data. We have a [React & Supabase example app](https://github.com/powersync-community/react-supabase-chat-e2ee) that demonstrates this approach. See also the accompanying [blog post](https://www.powersync.com/blog/building-an-e2ee-chat-app-with-powersync-supabase). ## See Also -* Database Setup → [Security & IP Filtering](/installation/database-setup/security-and-ip-filtering) +* Database Setup → [Security & IP Filtering](/configuration/source-db/security-and-ip-filtering) * Resources → [Security](/resources/security) \ No newline at end of file diff --git a/usage/use-case-examples/postgis.mdx b/client-sdks/advanced/gis-data-postgis.mdx similarity index 90% rename from usage/use-case-examples/postgis.mdx rename to client-sdks/advanced/gis-data-postgis.mdx index 35de764b..7c5a9119 100644 --- a/usage/use-case-examples/postgis.mdx +++ b/client-sdks/advanced/gis-data-postgis.mdx @@ -1,13 +1,9 @@ --- -title: "PostGIS" -description: "Custom types, arrays and [PostGIS](https://postgis.net/) are frequently presented together since geospatial data is often complex and multidimensional." +title: "GIS Data: PostGIS" +description: "For Postgres, PowerSync integrates well with PostGIS and provides tools for working with geo data." 
---

-## Overview
-
-It's therefore recommend to first quickly scan the content in [Custom Types, Arrays and JSON](/usage/use-case-examples/custom-types-arrays-and-json)
-
-PowerSync integrates well with PostGIS and provides tools for working with geo data.
+Custom types, arrays and [PostGIS](https://postgis.net/) are frequently presented together since geospatial data is often complex and multidimensional. It's therefore recommended to first quickly scan the content in [Custom Types, Arrays and JSON](/client-sdks/advanced/custom-types-arrays-and-json).

 ### PostGIS

@@ -21,7 +17,7 @@ The `geography` and `geometry` types are now available in your Postgres.

 ## Supabase Configuration Example:

-This example builds on the To-Do List demo app in our [Supabase integration guide](/integration-guides/supabase-+-powersync).
+This example builds on the To-Do List demo app in our [Supabase integration guide](/integrations/supabase/guide).

 ### Add custom type, array and PostGIS columns to the `todos` table

@@ -119,7 +115,7 @@ The data looks exactly how it’s stored in the Postgres database i.e.

 Example use case: Extract x (long) and y (lat) values from a PostGIS type, to use these values independently in an application.

-Currently, PowerSync supports the following functions that can be used when selecting data in your Sync Rules: [Operators and Functions](/usage/sync-rules/operators-and-functions)
+Currently, PowerSync supports the following functions that can be used when selecting data in your Sync Rules: [Operators and Functions](/sync/operators-and-functions)

 1. `ST_AsGeoJSON`
 2.
`ST_AsText` diff --git a/usage/use-case-examples/offline-only-usage.mdx b/client-sdks/advanced/local-only-usage.mdx similarity index 94% rename from usage/use-case-examples/offline-only-usage.mdx rename to client-sdks/advanced/local-only-usage.mdx index 96a6c32a..f0174ddf 100644 --- a/usage/use-case-examples/offline-only-usage.mdx +++ b/client-sdks/advanced/local-only-usage.mdx @@ -1,5 +1,5 @@ --- -title: "Local-only Usage" +title: "Local-Only Usage" description: "Some use cases require data persistence before the user has registered or signed in." --- @@ -21,7 +21,7 @@ final table = Table.localOnly( ) ``` -**Flutter + Drift users:** If you're using local-only tables with `viewName` overrides, Drift's watch streams may not update correctly. See the [troubleshooting guide](/client-sdk-references/flutter/flutter-orm-support#troubleshooting:-watch-streams-with-local-only-tables) for the solution. +**Flutter + Drift users:** If you're using local-only tables with `viewName` overrides, Drift's watch streams may not update correctly. See the [troubleshooting guide](/client-sdks/orms/flutter-orm-support#troubleshooting:-watch-streams-with-local-only-tables) for the solution. diff --git a/usage/use-case-examples/pre-seeded-sqlite.mdx b/client-sdks/advanced/pre-seeded-sqlite.mdx similarity index 97% rename from usage/use-case-examples/pre-seeded-sqlite.mdx rename to client-sdks/advanced/pre-seeded-sqlite.mdx index 6f12154b..a36fa8f5 100644 --- a/usage/use-case-examples/pre-seeded-sqlite.mdx +++ b/client-sdks/advanced/pre-seeded-sqlite.mdx @@ -7,12 +7,13 @@ description: "Optimizing Initial Sync by Pre-Seeding SQLite Databases." When syncing large amounts of data to connected clients, it can be useful to pre-seed the SQLite database with an initial snapshot of the data. This can help to reduce the initial sync time and improve the user experience. 
-To achieve this, you can run server-side processes using the [PowerSync Node.js SDK](/client-sdk-references/node) to pre-seed SQLite files. These SQLite files can then be uploaded to blob storage providers such as AWS S3, Azure Blob Storage, or Google Cloud Storage and downloaded directly by client applications. Client applications can then initialize the pre-seeded SQLite file, effectively bypassing the initial sync process. +To achieve this, you can run server-side processes using the [PowerSync Node.js SDK](/client-sdks/reference/node) to pre-seed SQLite files. These SQLite files can then be uploaded to blob storage providers such as AWS S3, Azure Blob Storage, or Google Cloud Storage and downloaded directly by client applications. Client applications can then initialize the pre-seeded SQLite file, effectively bypassing the initial sync process. ## Demo App If you're interested in seeing an end-to-end example, we've prepared a demo repo that can be used as a template for your own implementation. This repo covers all of the key concepts and code examples shown in this page. - + + Self-hosted PowerSync instance connected to a PostgreSQL database, using the PowerSync Node.js SDK, React Native SDK and AWS S3 for storing the pre-seeded SQLite files. diff --git a/usage/use-case-examples/query-json-in-sqlite.mdx b/client-sdks/advanced/query-json-in-sqlite.mdx similarity index 95% rename from usage/use-case-examples/query-json-in-sqlite.mdx rename to client-sdks/advanced/query-json-in-sqlite.mdx index 9dbfcaf3..829addf4 100644 --- a/usage/use-case-examples/query-json-in-sqlite.mdx +++ b/client-sdks/advanced/query-json-in-sqlite.mdx @@ -5,17 +5,17 @@ description: "How to query JSON data synced from your backend and stored as stri # Overview -When syncing data from your source backend database to PowerSync, JSON columns (whether from MongoDB documents, PostgreSQL JSONB columns, or other JSON data types) are stored as `TEXT` in SQLite. 
See the [sync rule type mapping guide](/usage/sync-rules/types#types) for more details. This guide shows you how to effectively query and filter JSON data using SQLite's powerful JSON functions on the client. +When syncing data from your backend source database to PowerSync, JSON columns (whether from MongoDB documents, PostgreSQL JSONB columns, or other JSON data types) are stored as `TEXT` in SQLite. See the [type mapping guide](/sync/types) for more details. This guide shows you how to effectively query and filter JSON data using SQLite's powerful JSON functions on the client. ## Understanding JSON Storage in PowerSync -Your backend database might store structured data as JSON in various ways: +Your backend source database might store structured data as JSON in various ways: - **MongoDB**: Nested documents and arrays - **PostgreSQL**: JSONB, JSON, array, or custom types - **MySQL**: JSON columns - **SQL Server**: JSON columns -Regardless of the source, PowerSync syncs these JSON structures to SQLite as `TEXT` columns. On the client side, you can query this data using SQLite's built-in JSON functions without needing to parse it yourself. Learn more about [how PowerSync handles JSON, arrays, and custom types](/usage/use-case-examples/custom-types-arrays-and-json#javascript). +Regardless of the source, PowerSync syncs these JSON structures to SQLite as `TEXT` columns. On the client side, you can query this data using SQLite's built-in JSON functions without needing to parse it yourself. Learn more about [how PowerSync handles JSON, arrays, and custom types](/client-sdks/advanced/custom-types-arrays-and-json#javascript). ## Example Data Structure @@ -45,7 +45,7 @@ Let's use a task management system where tasks have nested metadata: } ``` -In SQLite, the `assignees`, `tags`, and `metadata` columns are stored as JSON strings. For details on how different backend types map to SQLite, see [database types and mapping](/usage/sync-rules/types). 
+In SQLite, the `assignees`, `tags`, and `metadata` columns are stored as JSON strings. For details on how different backend types map to SQLite, see [database types and mapping](/sync/types). ## JSON Extraction Basics diff --git a/usage/use-case-examples/raw-tables.mdx b/client-sdks/advanced/raw-tables.mdx similarity index 93% rename from usage/use-case-examples/raw-tables.mdx rename to client-sdks/advanced/raw-tables.mdx index fa5d2718..6526e8a8 100644 --- a/usage/use-case-examples/raw-tables.mdx +++ b/client-sdks/advanced/raw-tables.mdx @@ -49,7 +49,7 @@ Consider raw tables when you need: Currently the sync system involves two general steps: -1. Download sync bucket operations from the PowerSync Service +1. Download bucket operations from the PowerSync Service 2. Once the client has a complete checkpoint and no pending local changes in the upload queue, sync the local database with the bucket operations The bucket operations use JSON to store the individual operation data. The local database uses tables with a simple schemaless `ps_data__` structure containing only an `id` (TEXT) and `data` (JSON) column. @@ -96,12 +96,12 @@ To reference the ID or extract values, prepared statements with parameters are u ```JavaScript const mySchema = new Schema({ - // Define your PowerSync-managed schema here - // ... + // Define your PowerSync-managed schema here + // ... }); mySchema.withRawTables({ // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match - // the table name from the backend database as sent by the PowerSync service. + // the table name from the backend source database as sent by the PowerSync Service. todo_lists: { put: { sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)', @@ -118,13 +118,13 @@ To reference the ID or extract values, prepared statements with parameters are u We will simplify this API after understanding the use-cases for raw tables better. 
- Raw tables are not part of the regular tables list and can be defined with the optional `rawTables` parameter. + Raw tables are not part of the regular tables list and can be defined with the optional `rawTables` parameter. - ```dart + ```dart final schema = Schema(const [], rawTables: const [ RawTable( // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match - // the table name from the backend database as sent by the PowerSync service. + // the table name from the backend source database as sent by the PowerSync Service. name: 'todo_lists', put: PendingStatement( sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)', @@ -146,10 +146,10 @@ To reference the ID or extract values, prepared statements with parameters are u ``` - To define a raw table, include it in the list of tables passed to the `Schema`: + To define a raw table, include it in the list of tables passed to the `Schema`: - ```Kotlin - val schema = Schema(listOf( + ```Kotlin + val schema = Schema(listOf( RawTable( // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match // the table name from the backend database as sent by the PowerSync service. @@ -191,8 +191,6 @@ To reference the ID or extract values, prepared statements with parameters are u Unfortunately, raw tables are not available in the .NET SDK yet. - - --- @@ -264,8 +262,10 @@ CREATE TRIGGER todo_lists_delete Raw tables support advanced table constraints including foreign keys. When enabling foreign keys however, you need to be aware of the following: -1. While PowerSync will always apply synced data in a transaction, there is no way to control the order in which rows get applied. For this reason, foreign keys need to be configured with `DEFERRABLE INITIALLY DEFERRED`. -2. 
When using [stream priorities](/usage/use-case-examples/prioritized-sync), you need to ensure you don't have foreign keys from high-priority rows to lower-priority data. PowerSync applies data in one transaction per priority, so these foreign keys would not work. +1. While PowerSync will always apply synced data in a transaction, there is no way to control the order in which rows get applied. + For this reason, foreign keys need to be configured with `DEFERRABLE INITIALLY DEFERRED`. +2. When using [stream priorities](/sync/advanced/prioritized-sync), you need to ensure you don't have foreign keys from high-priority + rows to lower-priority data. PowerSync applies data in one transaction per priority, so these foreign keys would not work. 3. As usual when using foreign keys, note that they need to be explicitly enabled with `pragma foreign_keys = on`. ## Migrations diff --git a/tutorials/client/data/sequential-id-mapping.mdx b/client-sdks/advanced/sequential-id-mapping.mdx similarity index 95% rename from tutorials/client/data/sequential-id-mapping.mdx rename to client-sdks/advanced/sequential-id-mapping.mdx index 19f018a0..a27fc462 100644 --- a/tutorials/client/data/sequential-id-mapping.mdx +++ b/client-sdks/advanced/sequential-id-mapping.mdx @@ -1,13 +1,13 @@ --- title: Sequential ID Mapping -description: In this tutorial we will show you how to map a local UUID to a remote sequential (auto-incrementing) ID. +description: Learn how to map a local UUID to a remote sequential (auto-incrementing) ID. sidebarTitle: Sequential ID Mapping keywords: ["data", "uuid", "map", "auto increment", "id", "sequential id"] --- -# Introduction -When auto-incrementing / sequential IDs are used on the backend database, the ID can only be generated on the backend database, and not on the client while offline. -To handle this, you can use a secondary UUID on the client, then map it to a sequential ID when performing an update on the backend database. 
+## Introduction +When auto-incrementing / sequential IDs are used on the backend source database, the ID can only be generated on the backend source database, and not on the client while offline. +To handle this, you can use a secondary UUID on the client, then map it to a sequential ID when performing an update on the backend source database. This allows using a sequential primary key for each record, with a UUID as a secondary ID. @@ -15,7 +15,7 @@ This allows using a sequential primary key for each record, with a UUID as a sec To illustrate this, we will use the [React To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist) and modify it to use UUIDs -on the client and map them to sequential IDs on the backend database (Supabase in this case). +on the client and map them to sequential IDs on the backend source database (Supabase in this case). ### Overview Before we get started, let's outline the changes we will have to make: @@ -46,7 +46,7 @@ Before we get started, let's outline the changes we will have to make: -# Schema +## Schema In order to map the UUID to the integer ID, we need to update the - `lists` table by adding a `uuid` column, which will be the secondary ID, and @@ -87,7 +87,7 @@ In order to map the UUID to the integer ID, we need to update the With the schema updated, we now need a method to synchronize and map the `list_id` and `list_uuid` in the `todos` table, with the `id` and `uuid` columns in the `lists` table. We can achieve this by creating SQL triggers. -# Create SQL Triggers +## Create SQL Triggers We need to create triggers that can look up the integer ID for the given UUID and vice versa. 
These triggers will maintain consistency between `list_id` and `list_uuid` in the `todos` table by ensuring that they remain synchronized with the `id` and `uuid` columns in the `lists` table; @@ -185,9 +185,9 @@ We will create the following two triggers that cover either scenario of updating We now have triggers in place that will handle the mapping for our updated schema and can move on to updating the Sync Rules to use the UUID column instead of the integer ID. -# Update Sync Rules +## Update Sync Rules -As sequential IDs can only be created on the backend database, we need to use UUIDs in the client. This can be done by updating both the `parameters` and `data` queries to use the new `uuid` columns. +As sequential IDs can only be created on the backend source database, we need to use UUIDs in the client. This can be done by updating both the `parameters` and `data` queries to use the new `uuid` columns. The `parameters` query is updated by removing the `list_id` alias (this is removed to avoid any confusion between the `list_id` column in the `todos` table), and the `data` query is updated to use the `uuid` column as the `id` column for the `lists` and `todos` tables. We also explicitly define which columns to select, as `list_id` is no longer required in the client. @@ -204,7 +204,7 @@ bucket_definitions: With the Sync Rules updated, we can now move on to updating the client to use UUIDs. -# Update Client to Use UUIDs +## Update Client to Use UUIDs With our Sync Rules updated, we no longer have the `list_id` column in the `todos` table. We start by updating `AppSchema.ts` and replacing `list_id` with `list_uuid` in the `todos` table. 
diff --git a/client-sdk-references/flutter/state-management.mdx b/client-sdks/advanced/state-management.mdx similarity index 93% rename from client-sdk-references/flutter/state-management.mdx rename to client-sdks/advanced/state-management.mdx index 976755ea..59bf1352 100644 --- a/client-sdk-references/flutter/state-management.mdx +++ b/client-sdks/advanced/state-management.mdx @@ -1,9 +1,13 @@ --- -title: "State Management" -description: "Guidance on using PowerSync with popular Flutter state management libraries." +title: "State Management Libraries" +description: "Guidance on using PowerSync with popular state management libraries in Dart/Flutter." --- -Our [demo apps](/resources/demo-apps-example-projects) for Flutter are intentionally kept simple to put a focus on demonstrating + + This section is currently only relevant for the Dart/Flutter SDK and may be expanded to cover other SDKs in the future. + + +Our [demo apps](/intro/examples) for Flutter are intentionally kept simple to put a focus on demonstrating PowerSync APIs. Instead of using heavy state management solutions, they use simple global fields to make the PowerSync database accessible to widgets. When adopting PowerSync, you might be interested in using a more sophisticated approach for state management. @@ -24,7 +28,7 @@ in the Dart ecosystem, PowerSync works well with all popular approaches for stat to `close()` the database in the `dispose` callback. 4. The BLoC pattern with the `bloc` package: You can easily listen to watched queries in Cubits (although, if you find your Blocs and Cubits becoming trivial wrappers around database streams, consider just `watch()`ing database queries in widgets directly. - That doesn't make your app [less testable](/client-sdk-references/flutter/unit-testing)!). + That doesn't make your app [less testable](/client-sdks/advanced/unit-testing)!). To simplify state management, avoid the use of hydrated blocs and cubits for state that depends on database queries. 
With PowerSync, regular data is already available locally and doesn't need a second local cache. diff --git a/client-sdk-references/flutter/unit-testing.mdx b/client-sdks/advanced/unit-testing.mdx similarity index 93% rename from client-sdk-references/flutter/unit-testing.mdx rename to client-sdks/advanced/unit-testing.mdx index 66c24b1e..892c43dd 100644 --- a/client-sdk-references/flutter/unit-testing.mdx +++ b/client-sdks/advanced/unit-testing.mdx @@ -1,8 +1,12 @@ --- title: "Unit Testing" -description: "Guidelines for unit testing with PowerSync" +description: "Guidelines for unit testing with PowerSync in Dart/Flutter." --- + + This section is currently only relevant for the Dart/Flutter SDK and may be expanded to cover other SDKs in the future. + + For unit-testing your projects using PowerSync (e.g. testing whether your queries run as expected) you will need the `powersync-sqlite-core` binary in your project's root directory. diff --git a/client-sdks/api-references.mdx b/client-sdks/api-references.mdx new file mode 100644 index 00000000..0f8046ec --- /dev/null +++ b/client-sdks/api-references.mdx @@ -0,0 +1,27 @@ +--- +title: "SDK API References" +description: "API references for PowerSync Client SDKs" +--- + +Links to all the API references for the client SDKs: + + + + + + + + + + + + + + + + + + A full API Reference for this SDK is not yet available. This is planned for a future release. + + + diff --git a/tutorials/client/data/cascading-delete.mdx b/client-sdks/cascading-delete.mdx similarity index 89% rename from tutorials/client/data/cascading-delete.mdx rename to client-sdks/cascading-delete.mdx index 409388b7..b5633d0e 100644 --- a/tutorials/client/data/cascading-delete.mdx +++ b/client-sdks/cascading-delete.mdx @@ -1,10 +1,10 @@ --- title: "Cascading Delete" -description: "In this tutorial we will show you how to perform a cascading delete on the client." +description: "Learn how to perform a cascading delete on the client." 
keywords: ["data", "cascade", "delete"]
 ---

-# Introduction
+## Introduction

 Since PowerSync utilizes SQLite views instead of standard tables, SQLite features like constraints, foreign keys, or cascading deletes are not available.

 Currently, there is no direct support for cascading deletes on the client. However, you can achieve this by either:
@@ -19,7 +19,7 @@ Currently, there is no direct support for cascading deletes on the client. Howev
    done [here](https://github.com/powersync-ja/powersync-js/blob/e77b1abfbed91988de1f4c707c24855cd66b2219/demos/react-supabase-todolist/src/app/utils/fts_setup.ts#L50)

-# Example
+## Example

 The following example is taken from the [React Native To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist).
 It showcases how to delete a `list` and all its associated `todos` in a single transaction.

@@ -36,6 +36,6 @@ It showcases how to delete a `list` and all its associated `todos` in a single t
 ```

- An important thing to note is that the local SQLite database will always match the backend database, as long as the tables are in the publication, when online.
+ An important thing to note is that, when online, the local SQLite database will always match the backend source database, as long as the tables are in the publication.
 For example, if you delete a record from the local `lists` table and Supabase cascade deletes a record from the `todo` table, PowerSync will also delete the local `todo` record when online.
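The pattern boils down to deleting child rows before the parent inside a single write transaction. A minimal sketch (`writeTransaction`/`execute` follow the PowerSync JS SDK API; table and column names are assumed from the To-Do List demo):

```javascript
// Sketch: emulate ON DELETE CASCADE manually inside one write transaction,
// removing child `todos` rows before the parent `lists` row. Table and
// column names are assumptions based on the demo app schema.
async function deleteListCascading(db, listId) {
  await db.writeTransaction(async (tx) => {
    await tx.execute('DELETE FROM todos WHERE list_id = ?', [listId]);
    await tx.execute('DELETE FROM lists WHERE id = ?', [listId]);
  });
}
```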
\ No newline at end of file diff --git a/client-sdk-references/react-native-and-expo/expo-go-support.mdx b/client-sdks/frameworks/expo-go-support.mdx similarity index 91% rename from client-sdk-references/react-native-and-expo/expo-go-support.mdx rename to client-sdks/frameworks/expo-go-support.mdx index c38a026d..71b03e7f 100644 --- a/client-sdk-references/react-native-and-expo/expo-go-support.mdx +++ b/client-sdks/frameworks/expo-go-support.mdx @@ -1,6 +1,7 @@ --- title: "Expo Go Support" description: "PowerSync supports Expo Go with @powersync/adapter-sql-js" +sidebarTitle: "Expo Go Support" --- Expo Go is a sandbox environment that allows you to quickly test your application without building a development build. To enable PowerSync in Expo Go, we provide a JavaScript-based database adapter: [`@powersync/adapter-sql-js`](https://www.npmjs.com/package/@powersync/adapter-sql-js). @@ -108,9 +109,9 @@ export default function HomeScreen() { After adding PowerSync to your app: -1. [**Define what data to sync by setting up Sync Rules**](/usage/sync-rules) -2. [**Implement your SQLite client schema**](/client-sdk-references/react-native-and-expo#1-define-the-schema) -3. [**Connect to PowerSync and your backend**](/client-sdk-references/react-native-and-expo#3-integrate-with-your-backend) +1. [**Define what data to sync by setting up Sync Rules**](/sync/rules/overview) +2. [**Implement your SQLite client schema**](/client-sdks/reference/react-native-and-expo#1-define-the-client-side-schema) +3. 
[**Connect to PowerSync and your backend**](/client-sdks/reference/react-native-and-expo#3-integrate-with-your-backend) ## Data Persistence @@ -124,11 +125,11 @@ When you're ready to move beyond the Expo Go sandbox environment - whether for n - [OP-SQLite](https://www.npmjs.com/package/@powersync/op-sqlite) (Recommended) - Offers built-in encryption support and better React Native New Architecture compatibility - [React Native Quick SQLite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) - Our original native adapter - + These database adapters cannot run in Expo Go because they require native code compilation. Specifically, PowerSync needs a SQLite implementation that can load our [Rust core extension](https://github.com/powersync-ja/powersync-sqlite-core), which isn't possible in Expo Go's prebuilt app container. - + -These adapters provide better performance, full SQLite consistency guarantees, and are suitable for both development builds and production deployment. See the SDKs [Installation](/client-sdk-references/react-native-and-expo#install-peer-dependencies) details for setup instructions. +These adapters provide better performance, full SQLite consistency guarantees, and are suitable for both development builds and production deployment. See the SDKs [Installation](/client-sdks/reference/react-native-and-expo#install-peer-dependencies) details for setup instructions. 
### Switching Between Adapters - Example diff --git a/client-sdk-references/flutter/flutter-web-support.mdx b/client-sdks/frameworks/flutter-web-support.mdx similarity index 99% rename from client-sdk-references/flutter/flutter-web-support.mdx rename to client-sdks/frameworks/flutter-web-support.mdx index 9b889d48..3cf8b2b5 100644 --- a/client-sdk-references/flutter/flutter-web-support.mdx +++ b/client-sdks/frameworks/flutter-web-support.mdx @@ -1,6 +1,6 @@ --- title: "Flutter Web Support (Beta)" -sidebarTitle: "Web Support (Beta)" +sidebarTitle: "Flutter Web Support" --- diff --git a/tutorials/client/sdks/web/next-js.mdx b/client-sdks/frameworks/next-js.mdx similarity index 96% rename from tutorials/client/sdks/web/next-js.mdx rename to client-sdks/frameworks/next-js.mdx index f4018f22..946e19c8 100644 --- a/tutorials/client/sdks/web/next-js.mdx +++ b/client-sdks/frameworks/next-js.mdx @@ -1,5 +1,6 @@ --- title: "Next.js + PowerSync" +sidebarTitle: "Next.js" description: "A guide for creating a new Next.js application with PowerSync for offline/local first functionality" keywords: ["next.js", "web"] --- @@ -106,9 +107,9 @@ Run `pnpm dev` to start the development server and check that everything compile ## Configure a PowerSync Instance Now that we've got our project setup, let's create a new PowerSync Cloud instance and connect our client to it. -For the purposes of this demo, we'll be using Supabase as the source backend database that PowerSync will connect to. +For the purposes of this demo, we'll be using Supabase as the backend source database that PowerSync will connect to. -To set up a new PowerSync instance, follow the steps covered in the [Installation - Database Connection](/installation/database-connection) docs page. +To set up a new PowerSync instance, follow the steps covered in the [Installation - Database Connection](/configuration/source-db/connection) docs page. 
## Configure PowerSync in your project ### Add core PowerSync files @@ -215,12 +216,12 @@ export class BackendConnector implements PowerSyncBackendConnector { ``` There are two core functions to this file: -* `fetchCredentials()` - Used to return a JWT token to the PowerSync service for authentication. -* `uploadData()` - Used to upload changes captured in the local SQLite database that need to be sent to the source backend database, in this case Supabase. We'll get back to this further down. +* `fetchCredentials()` - Used to return a JWT token to the PowerSync Service for authentication. +* `uploadData()` - Used to upload changes captured in the local SQLite database that need to be sent to the backend source database, in this case Supabase. We'll get back to this further down. You'll notice that we need to add a `.env` file to our project which will contain two variables: * `NEXT_PUBLIC_POWERSYNC_URL` - This is the PowerSync instance url. You can grab this from the PowerSync Cloud dashboard. -* `NEXT_PUBLIC_POWERSYNC_TOKEN` - For development purposes we'll be using a development token. To generate one, please follow the steps outlined in [Development Token](/installation/authentication-setup/development-tokens) from our installation docs. +* `NEXT_PUBLIC_POWERSYNC_TOKEN` - For development purposes we'll be using a development token. To generate one, please follow the steps outlined in [Development Token](/configuration/auth/development-tokens) from our installation docs. ### Create Providers @@ -276,7 +277,7 @@ export default SystemProvider; The `SystemProvider` will be responsible for initializing the `PowerSyncDatabase`. Here we supply a few arguments, such as the AppSchema we defined earlier along with very important properties such as `ssrMode: false`. PowerSync will not work when rendered server side, so we need to explicitly disable SSR. -We also instantiate our `BackendConnector` and pass an instance of that to `db.connect()`. 
This will connect to the PowerSync instance, validate the token supplied in the `fetchCredentials` function and then start syncing with the PowerSync service. +We also instantiate our `BackendConnector` and pass an instance of that to `db.connect()`. This will connect to the PowerSync instance, validate the token supplied in the `fetchCredentials` function and then start syncing with the PowerSync Service. #### DynamicSystemProvider.tsx diff --git a/client-sdk-references/react-native-and-expo/react-native-web-support.mdx b/client-sdks/frameworks/react-native-web-support.mdx similarity index 94% rename from client-sdk-references/react-native-and-expo/react-native-web-support.mdx rename to client-sdks/frameworks/react-native-web-support.mdx index 8b92917a..d9ded005 100644 --- a/client-sdk-references/react-native-and-expo/react-native-web-support.mdx +++ b/client-sdks/frameworks/react-native-web-support.mdx @@ -1,5 +1,6 @@ --- title: "React Native Web Support" +sidebarTitle: "React Native Web Support" --- [React Native for Web](https://necolas.github.io/react-native-web/) enables developers to use the same React Native codebase for both mobile and web platforms. @@ -7,7 +8,7 @@ title: "React Native Web Support" **Availability** -Support for React Native Web is available since versions 1.12.1 of the PowerSync [React Native SDK](/client-sdk-references/react-native-and-expo) and 1.8.0 if the [JavaScript Web SDK](/client-sdk-references/javascript-web), and is currently in a **beta** release. +Support for React Native Web is available since versions 1.12.1 of the PowerSync [React Native SDK](/client-sdks/reference/react-native-and-expo) and 1.8.0 of the [JavaScript Web SDK](/client-sdks/reference/javascript-web), and is currently in a **beta** release. A demo app showcasing this functionality is available here: @@ -20,7 +21,7 @@ To ensure that PowerSync features are fully supported in your React Native Web p ### 1.
Install Web SDK -The [PowerSync Web SDK](/client-sdk-references/javascript-web), alongside the [PowerSync React Native SDK](/client-sdk-references/react-native-and-expo), is required for Web support. +The [PowerSync Web SDK](/client-sdks/reference/javascript-web), alongside the [PowerSync React Native SDK](/client-sdks/reference/react-native-and-expo), is required for Web support. See installation instructions [here](https://www.npmjs.com/package/@powersync/web). @@ -82,7 +83,7 @@ this.powersync = new PowerSyncDatabaseWeb({ }); ``` -This `PowerSyncDatabaseWeb` database will be used alongside the native `PowerSyncDatabase` to support platform-specific implementations. See the [Instantiating PowerSync](/client-sdk-references/react-native-and-expo/react-native-web-support#implementations) section below for more details. +This `PowerSyncDatabaseWeb` database will be used alongside the native `PowerSyncDatabase` to support platform-specific implementations. See the [Instantiating PowerSync](#implementations) section below for more details. ### 4. Enable multiple platforms diff --git a/client-sdks/frameworks/react.mdx b/client-sdks/frameworks/react.mdx new file mode 100644 index 00000000..cea0d214 --- /dev/null +++ b/client-sdks/frameworks/react.mdx @@ -0,0 +1,23 @@ +--- +title: "React Hooks" +--- + +The `@powersync/react` package provides React hooks for use with the [JavaScript Web SDK](/client-sdks/reference/javascript-web) or [React Native SDK](/client-sdks/reference/react-native-and-expo). These hooks are designed to support reactivity, and can be used to automatically re-render React components when query results update or to access PowerSync connectivity status changes. + + + +The main hooks available are: + +* `useQuery`: This allows you to access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties. + +* `useStatus`: Access the PowerSync connectivity status. 
This can be used to update the UI based on whether the client is connected or not. + +* `useSuspenseQuery`: This hook also allows you to access the results of a watched query, but its loading and fetching states are handled through [Suspense](https://react.dev/reference/react/Suspense). It automatically converts certain loading/fetching states into Suspense signals, triggering Suspense boundaries in parent components. + + +For advanced watch query features like incremental updates and differential results for React Hooks, see [Live Queries / Watch Queries](/client-sdks/watch-queries). + + +The full API Reference and example code can be found here: + + \ No newline at end of file diff --git a/client-sdks/frameworks/tanstack.mdx b/client-sdks/frameworks/tanstack.mdx new file mode 100644 index 00000000..bbd35914 --- /dev/null +++ b/client-sdks/frameworks/tanstack.mdx @@ -0,0 +1,76 @@ +--- +title: "TanStack Query & TanStack DB" +description: "PowerSync integrates with multiple TanStack libraries." +--- + +## TanStack Query + +PowerSync integrates with [TanStack Query](https://tanstack.com/query/latest/docs/framework/react/overview) (formerly React Query) through the `@powersync/tanstack-react-query` package. + + + +This package wraps TanStack's `useQuery` and `useSuspenseQuery` hooks, bringing many of TanStack's advanced asynchronous state management features to PowerSync web and React Native applications, including: + +* **Loading and error states** via [`useQuery`](https://tanstack.com/query/latest/docs/framework/react/guides/queries) + +* [**React Suspense**](https://tanstack.com/query/latest/docs/framework/react/guides/suspense) **support**: `useSuspenseQuery` automatically converts certain loading states into Suspense signals, triggering Suspense boundaries in parent components. 
+ +* [**Caching queries**](https://tanstack.com/query/latest/docs/framework/react/guides/caching): Queries are cached with a unique key and reused across the app, so subsequent instances of the same query won't refire unnecessarily. + +* **Built-in support for** [**pagination**](https://tanstack.com/query/latest/docs/framework/react/guides/paginated-queries) + + + #### Additional hooks + + We plan to support more TanStack Query hooks over time. If there are specific hooks you're interested in, please let us know on [Discord](https://discord.gg/powersync). + + +### Example Use Case + +When navigating to or refreshing a page, you may notice a brief UI "flicker" (10-50ms). Here are a few ways to manage this with TanStack Query: + +* **First load**: When a page loads for the first time, use a loading indicator or a Suspense fallback to handle queries. See the [examples](https://www.npmjs.com/package/@powersync/tanstack-react-query#usage). + +* **Subsequent loads**: With TanStack's query caching, subsequent loads of the same page won't refire queries, which reduces the flicker effect. + +* **Block navigation until components are ready**: Using `useSuspenseQuery`, you can ensure that navigation from page A to page B only happens after the queries for page B have loaded. You can do this by combining `useSuspenseQuery` with the `` element and React Router’s [`v7_startTransition`](https://reactrouter.com/en/main/upgrading/future#v7_starttransition) future flag, which blocks navigation until all suspending components are ready. + +### Usage and Examples + +For more examples and usage details, see the package [README](https://www.npmjs.com/package/@powersync/tanstack-react-query). + +The full API Reference can be found here: + + + +## TanStack DB + +[TanStack DB](https://tanstack.com/db/latest/) is a reactive client store that provides blazing-fast in-memory queries, optimistic updates, and cross-collection queries. 
When [combined](https://tanstack.com/db/latest/docs/collections/powersync-collection) with PowerSync, you get the best of both worlds: TanStack DB's powerful query capabilities with PowerSync's battle-tested offline-first and multi-tab capable sync engine. + + + The PowerSync TanStack DB collection is currently in an [Alpha](/resources/feature-status) release. + + +### TanStack DB Features: + +* **Blazing Fast In-Memory Queries**: Built on differential data flow, TanStack DB's live queries update incrementally (rather than re-running entire queries), making queries incredibly fast, even for complex queries across multiple collections. + +* **Reactive Data Flow**: Live queries automatically update when underlying data changes, triggering component re-renders only when necessary. + +* **Optimistic Updates**: Mutations apply instantly to the local state, providing immediate user feedback. TanStack DB maintains separate optimistic state that overlays on top of synced data, and automatically rolls back if the server request fails. + +* **Cross-Collection Queries**: Live queries support joins across collections, allowing you to load normalized data and then denormalize it through queries. + +### Framework Support: + +PowerSync works with all TanStack DB framework adapters: + +* React ([`@tanstack/react-db`](https://tanstack.com/db/latest/docs/framework/react/overview)) +* Vue ([`@tanstack/vue-db`](https://tanstack.com/db/latest/docs/framework/vue/overview)) +* Solid ([`@tanstack/solid-db`](https://tanstack.com/db/latest/docs/framework/solid/overview)) +* Svelte ([`@tanstack/svelte-db`](https://tanstack.com/db/latest/docs/framework/svelte/overview)) +* Angular ([`@tanstack/angular-db`](https://tanstack.com/db/latest/docs/framework/angular/overview)) + +### Documentation: + +For detailed documentation, examples, and API reference, see the [TanStack DB PowerSync Collection documentation](https://tanstack.com/db/latest/docs/collections/powersync-collection). 
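The optimistic-update behaviour described above — pending mutations overlaid on synced data, reverted if the server request fails — can be sketched framework-free. This toy `OptimisticStore` is illustrative only and is not the TanStack DB API:

```javascript
// Toy illustration of an optimistic overlay: reads see synced rows with
// pending local mutations applied on top; a failed server request simply
// drops the overlay entry, reverting reads to the last synced value.
class OptimisticStore {
  constructor() {
    this.synced = new Map();  // rows confirmed by the sync engine
    this.overlay = new Map(); // pending optimistic mutations
  }
  read(id) {
    return this.overlay.has(id) ? this.overlay.get(id) : this.synced.get(id);
  }
  async mutate(id, value, sendToServer) {
    this.overlay.set(id, value); // instant local feedback
    try {
      await sendToServer(id, value);
      this.synced.set(id, value); // confirmed: promote to synced state
    } finally {
      this.overlay.delete(id); // success or failure, the overlay entry goes away
    }
  }
}
```

The key property is that rollback requires no undo logic: discarding the overlay entry is enough, because the synced state was never touched by the unconfirmed write.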
diff --git a/client-sdks/frameworks/vue.mdx b/client-sdks/frameworks/vue.mdx new file mode 100644 index 00000000..cbe53dc5 --- /dev/null +++ b/client-sdks/frameworks/vue.mdx @@ -0,0 +1,21 @@ +--- +title: "Vue Composables" +--- + +The [`@powersync/vue`](https://www.npmjs.com/package/@powersync/vue) package is a Vue-specific wrapper for PowerSync. It provides Vue [composables](https://vuejs.org/guide/reusability/composables) that are designed to support reactivity, and can be used to automatically re-render components when query results update or to access PowerSync connectivity status changes. + + + +The main composables available are: + +* `useQuery`: This allows you to access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties. + +* `useStatus`: Access the PowerSync connectivity status. This can be used to update the UI based on whether the client is connected or not. + + +For advanced watch query features like incremental updates and differential results for Vue composables, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
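The watched-query mechanism that `useQuery` builds on can be reduced to a small dependency-free sketch: re-run the query whenever a dependent table reports a change, and hand fresh results to the subscriber. The names below are illustrative, not the SDK API:

```javascript
// Minimal watched-query loop: rerun the query on every change notification.
function createTable() {
  const listeners = new Set();
  return {
    onChange(fn) { listeners.add(fn); return () => listeners.delete(fn); },
    notifyChanged() { for (const fn of listeners) fn(); }
  };
}

function watchQuery(runQuery, table, onResults) {
  const rerun = () => onResults(runQuery());
  const unsubscribe = table.onChange(rerun);
  rerun(); // initial emission (after this, an isLoading flag would flip to false)
  return unsubscribe; // dispose to stop receiving updates
}
```

A `useQuery`-style composable wraps exactly this loop: subscribe when the component mounts, copy each `onResults` payload into reactive state, and unsubscribe when the component unmounts.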
+ + +The full API Reference and example code can be found here: + + \ No newline at end of file diff --git a/usage/use-case-examples/full-text-search.mdx b/client-sdks/full-text-search.mdx similarity index 97% rename from usage/use-case-examples/full-text-search.mdx rename to client-sdks/full-text-search.mdx index 651c62bc..1521589d 100644 --- a/usage/use-case-examples/full-text-search.mdx +++ b/client-sdks/full-text-search.mdx @@ -7,10 +7,10 @@ This requires creating a separate FTS5 table(s) to index the data, and updating Full-text search has been demonstrated in the following SDKs: -- [**Dart/Flutter SDK**](/client-sdk-references/flutter): Uses the [sqlite_async](https://pub.dev/documentation/sqlite_async/latest/) package for migrations -- [**JavaScript Web SDK**](/client-sdk-references/javascript-web): Requires version 0.5.0 or greater (including [wa-sqlite](https://github.com/powersync-ja/wa-sqlite) 0.2.0+) -- [**React Native SDK**](/client-sdk-references/react-native-and-expo): Requires version 1.16.0 or greater (including [@powersync/react-native-quick-sqlite](https://github.com/powersync-ja/react-native-quick-sqlite) 2.2.1+) -- [**Swift SDK**](/client-sdk-references/swift) +- [**Dart/Flutter SDK**](/client-sdks/reference/flutter): Uses the [sqlite_async](https://pub.dev/documentation/sqlite_async/latest/) package for migrations +- [**JavaScript Web SDK**](/client-sdks/reference/javascript-web): Requires version 0.5.0 or greater (including [wa-sqlite](https://github.com/powersync-ja/wa-sqlite) 0.2.0+) +- [**React Native SDK**](/client-sdks/reference/react-native-and-expo): Requires version 1.16.0 or greater (including [@powersync/react-native-quick-sqlite](https://github.com/powersync-ja/react-native-quick-sqlite) 2.2.1+) +- [**Swift SDK**](/client-sdks/reference/swift) Note that the availability of FTS in our SDKs is dependent on the underlying `sqlite` package used. 
It may be supported in our other SDKs, especially if the `FTS5` extension is available, but would be untested. Check with us on [Discord](https://discord.gg/powersync) if you have a use case and need help getting started. diff --git a/usage/use-case-examples/high-performance-diffs.mdx b/client-sdks/high-performance-diffs.mdx similarity index 86% rename from usage/use-case-examples/high-performance-diffs.mdx rename to client-sdks/high-performance-diffs.mdx index 12729ae4..59672a4f 100644 --- a/usage/use-case-examples/high-performance-diffs.mdx +++ b/client-sdks/high-performance-diffs.mdx @@ -6,7 +6,7 @@ description: 'Efficiently get row changes using trigger-based table diffs (JS)' # Overview -While [basic/incremental watch queries](/usage/use-case-examples/watch-queries) enable reactive UIs by automatically re‑running queries when underlying data changes and returning updated results, they don't specify which individual rows were modified. To get these details, you can use [**differential watch queries**](/usage/use-case-examples/watch-queries#differential-watch-queries), which return a structured diff between successive query results. However, on large result sets they can be slow because they re‑run the query and compare full results (e.g., scanning ~1,000 rows to detect 1 new item). That’s why we introduced **trigger‑based table diffs**: a more performant approach that uses SQLite triggers to record changes on a table as they happen. This means that the overhead associated with tracking these changes overhead is more proportional to the number of rows inserted, updated, or deleted. +While [basic/incremental watch queries](/client-sdks/watch-queries) enable reactive UIs by automatically re‑running queries when underlying data changes and returning updated results, they don't specify which individual rows were modified. 
To get these details, you can use [**differential watch queries**](/client-sdks/watch-queries#differential-watch-queries), which return a structured diff between successive query results. However, on large result sets they can be slow because they re‑run the query and compare full results (e.g., scanning ~1,000 rows to detect 1 new item). That’s why we introduced **trigger‑based table diffs**: a more performant approach that uses SQLite triggers to record changes on a table as they happen. This means that the overhead associated with tracking these changes is proportional to the number of rows inserted, updated, or deleted. **JavaScript Only**: Trigger-based table diffs are currently only supported in our JavaScript SDKs, starting from: @@ -32,7 +32,7 @@ Join our [Discord community](https://discord.gg/powersync) to share your experie - **Storage/shape**: Trigger-based diffs store changes as rows in a temporary SQLite table that you can query with SQL. Differential watch diffs are exposed to app code as JS objects/arrays. - **Filtering**: Trigger-based diffs can filter/skip storing diff records inside the SQLite trigger, which prevents emissions on a lower level. Differential watches query the SQLite DB on any change to the query's dependent tables, and the changes are filtered after querying SQLite. -In summary, **differential watch queries** are the most flexible (they work with arbitrary, multi‑table queries), but they can be slow on large result sets.
For those cases, **trigger-based diffs** are more efficient, but they only track a single table and add some write overhead. For usage and examples of differential watch queries, see [Differential Watch Queries](/client-sdks/watch-queries#differential-watch-queries). ## Trigger-based diffs @@ -50,7 +50,7 @@ Trigger-based diffs create temporary SQLite triggers and a temporary table to re - Column filters are applied by inspecting JSON changes in the underlying row and determining whether the configured columns changed. - Diff rows can be queried as if they were real columns (not raw JSON) using the `withExtractedDiff(...)` helper. - You can also create your own triggers manually (for example, as shown in the [Full‑Text Search example](/usage/use-case-examples/full-text-search)), but be mindful of the view/trigger limitation and target the underlying table rather than the view. + You can also create your own triggers manually (for example, as shown in the [Full‑Text Search example](/client-sdks/full-text-search)), but be mindful of the view/trigger limitation and target the underlying table rather than the view. ## Tracking and reacting to changes (recommended) diff --git a/usage/use-case-examples/infinite-scrolling.mdx b/client-sdks/infinite-scrolling.mdx similarity index 82% rename from usage/use-case-examples/infinite-scrolling.mdx rename to client-sdks/infinite-scrolling.mdx index 74cc32ed..19725cd9 100644 --- a/usage/use-case-examples/infinite-scrolling.mdx +++ b/client-sdks/infinite-scrolling.mdx @@ -19,13 +19,13 @@ This means that in many cases, you can sync a sufficient amount of data to let a ### 2) Control data sync using client parameters -PowerSync supports the use of [client parameters](/usage/sync-rules/advanced-topics/client-parameters) which are specified directly by the client (i.e. not only through the [authentication token](/installation/authentication-setup/custom)). 
The app can dynamically change these parameters on the client-side and they can be accessed in sync rules on the server-side. The developer can use these parameters to limit/control which data is synced, but since they are not trusted (because they are not passed via the JWT authentication token) they should not be used for access control. You should still filter data by e.g. user ID for access control purposes (using [token parameters](/usage/sync-rules/parameter-queries) from the JWT). +PowerSync supports the use of [client parameters](/sync/rules/client-parameters) which are specified directly by the client (i.e. not only through the [authentication token](/configuration/auth/custom)). The app can dynamically change these parameters on the client-side and they can be accessed in sync rules on the server-side. The developer can use these parameters to limit/control which data is synced, but since they are not trusted (because they are not passed via the JWT authentication token) they should not be used for access control. You should still filter data by e.g. user ID for access control purposes (using [token parameters](/sync/rules/parameter-queries) from the JWT). Usage example: To lazy-load/lazy-sync data for infinite scrolling, you could split your data into 'pages' and use a client parameter to specify which pages to sync to a user. | Pros | Cons | | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | -| Does not require updating flags in your backend database. Enables client-side control over what data is synced. | We can only sync additional data when the user is online. There will be latency while the user waits for the additional data to sync. | +| Does not require updating flags in your backend source database. 
Enables client-side control over what data is synced. | We can only sync additional data when the user is online. There will be latency while the user waits for the additional data to sync. | ### 3) Sync limited data and then load more data from an API @@ -37,7 +37,7 @@ In this scenario we can sync a smaller number of rows to the user initially. If ### 4) Client-side triggers a server-side function to flag data to sync -You could add a flag to certain records in your backend database which are used by your [Sync Rules](/usage/sync-rules) to determine which records to sync to specific users. Then your app could make an API call which triggers a function that updates the flags on certain records, causing more records to be synced to the user. +You could add a flag to certain records in your backend source database which are used by your [Sync Rules](/sync/rules/overview) to determine which records to sync to specific users. Then your app could make an API call which triggers a function that updates the flags on certain records, causing more records to be synced to the user. 
| Pros | Cons | | ---------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | diff --git a/client-sdks/javascript-web/usage-examples.mdx b/client-sdks/javascript-web/usage-examples.mdx new file mode 100644 index 00000000..4f690643 --- /dev/null +++ b/client-sdks/javascript-web/usage-examples.mdx @@ -0,0 +1,7 @@ +--- +title: "Usage Examples" +description: "Code snippets and guidelines for common scenarios" +sidebarTitle: "JavaScript Web Usage Examples" +--- + + diff --git a/client-sdk-references/flutter/flutter-orm-support.mdx b/client-sdks/orms/flutter-orm-support.mdx similarity index 92% rename from client-sdk-references/flutter/flutter-orm-support.mdx rename to client-sdks/orms/flutter-orm-support.mdx index df90ab5f..b1a54f57 100644 --- a/client-sdk-references/flutter/flutter-orm-support.mdx +++ b/client-sdks/orms/flutter-orm-support.mdx @@ -1,10 +1,7 @@ --- -title: "Flutter ORM Support" -sidebarTitle: "ORM Support" +title: "Dart/Flutter ORM Support" --- -An introduction to using ORMs with PowerSync is available on our blog [here](https://www.powersync.com/blog/using-orms-with-powersync). - ORM support is available via the following package (currently in a beta release): -This package enables using [Drizzle](https://orm.drizzle.team/) with the PowerSync [React Native](/client-sdk-references/react-native-and-expo) and [JavaScript Web](/client-sdk-references/javascript-web) SDKs. +This package enables using [Drizzle](https://orm.drizzle.team/) with the PowerSync [React Native](/client-sdks/reference/react-native-and-expo) and [JavaScript Web](/client-sdks/reference/javascript-web) SDKs. 
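The wrapping pattern these ORM packages follow — a thin builder layer that generates SQL and delegates execution to the underlying PowerSync database — can be sketched with a toy builder. The `getAll(sql, params)` shape below is an assumption for illustration and is not the Drizzle API:

```javascript
// Toy query builder: accumulates conditions, renders SQL, and delegates
// execution to a database object exposing getAll(sql, params).
function wrapWithToyBuilder(db) {
  return {
    selectFrom(table) {
      const conditions = [];
      const params = [];
      return {
        where(column, value) {
          conditions.push(`${column} = ?`); // parameterized, never interpolated
          params.push(value);
          return this;
        },
        execute() {
          const where = conditions.length ? ` WHERE ${conditions.join(' AND ')}` : '';
          return db.getAll(`SELECT * FROM ${table}${where}`, params);
        }
      };
    }
  };
}
```

Because the builder only ever produces SQL strings, the PowerSync database underneath stays the single source of truth for execution, watching, and syncing — which is what makes this layering approach work for any ORM.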
## Setup @@ -69,7 +69,7 @@ export const db = wrapPowerSyncWithDrizzle(powerSyncDb, { ## Schema Conversion -The `DrizzleAppSchema` constructor simplifies the process of integrating Drizzle with PowerSync. It infers the local [PowerSync schema](/installation/client-side-setup/define-your-schema) from your Drizzle schema definition, providing a unified development experience. +The `DrizzleAppSchema` constructor simplifies the process of integrating Drizzle with PowerSync. It infers the [client-side PowerSync schema](/intro/setup-guide#define-your-client-side-schema) from your Drizzle schema definition, providing a unified development experience. As the PowerSync schema only supports SQLite types (`text`, `integer`, and `real`), the same limitation extends to the Drizzle table definitions. diff --git a/client-sdk-references/javascript-web/javascript-orm/kysely.mdx b/client-sdks/orms/js/kysely.mdx similarity index 96% rename from client-sdk-references/javascript-web/javascript-orm/kysely.mdx rename to client-sdks/orms/js/kysely.mdx index c7e66aef..0a1f6603 100644 --- a/client-sdk-references/javascript-web/javascript-orm/kysely.mdx +++ b/client-sdks/orms/js/kysely.mdx @@ -23,7 +23,7 @@ Set up the PowerSync Database and wrap it with Kysely. 
import { wrapPowerSyncWithKysely } from '@powersync/kysely-driver'; import { PowerSyncDatabase } from '@powersync/web'; -// Define schema as in: https://docs.powersync.com/usage/installation/client-side-setup/define-your-schema +// Define schema as in: https://docs.powersync.com/intro/setup-guide#define-your-client-side-schema import { appSchema } from './schema'; export const powerSyncDb = new PowerSyncDatabase({ @@ -42,7 +42,7 @@ export const db = wrapPowerSyncWithKysely(powerSyncDb); import { wrapPowerSyncWithKysely } from '@powersync/kysely-driver'; import { PowerSyncDatabase } from "@powersync/web"; -// Define schema as in: https://docs.powersync.com/usage/installation/client-side-setup/define-your-schema +// Define schema as in: https://docs.powersync.com/intro/setup-guide#define-your-client-side-schema import { appSchema, Database } from "./schema"; export const powerSyncDb = new PowerSyncDatabase({ diff --git a/client-sdk-references/javascript-web/javascript-orm/overview.mdx b/client-sdks/orms/js/overview.mdx similarity index 66% rename from client-sdk-references/javascript-web/javascript-orm/overview.mdx rename to client-sdks/orms/js/overview.mdx index 8ae0d91e..26e10194 100644 --- a/client-sdk-references/javascript-web/javascript-orm/overview.mdx +++ b/client-sdks/orms/js/overview.mdx @@ -4,20 +4,18 @@ description: "Reference for using ORMs in PowerSync's JavaScript-based SDKs" sidebarTitle: Overview --- -An introduction to using ORMs with PowerSync is available on our blog [here](https://www.powersync.com/blog/using-orms-with-powersync). - The following ORMs and query libraries are officially supported: Kysely query builder for PowerSync. Drizzle ORM for PowerSync. 
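The schema-inference step described for `DrizzleAppSchema` — collapsing ORM column definitions down to the `text`, `integer`, and `real` types the PowerSync schema supports — can be sketched as a plain mapping. The input shape here is hypothetical, not the real Drizzle definition format:

```javascript
// Maps a hypothetical ORM-style table definition to a flat schema that
// only allows the three SQLite column types PowerSync supports.
const SQLITE_TYPES = new Set(['text', 'integer', 'real']);

function inferSchema(tables) {
  const schema = {};
  for (const [tableName, columns] of Object.entries(tables)) {
    schema[tableName] = {};
    for (const [columnName, type] of Object.entries(columns)) {
      if (!SQLITE_TYPES.has(type)) {
        // Surface unsupported types early rather than failing at query time.
        throw new Error(`Unsupported column type "${type}" on ${tableName}.${columnName}`);
      }
      schema[tableName][columnName] = type;
    }
  }
  return schema;
}
```

Rejecting unsupported types at schema-construction time mirrors why the Drizzle table definitions inherit the SQLite type limitation: the client schema can only describe what SQLite can store.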
diff --git a/client-sdk-references/javascript-web/javascript-orm/tanstack-db.mdx b/client-sdks/orms/js/tanstack-db.mdx similarity index 100% rename from client-sdk-references/javascript-web/javascript-orm/tanstack-db.mdx rename to client-sdks/orms/js/tanstack-db.mdx diff --git a/client-sdk-references/kotlin/libraries/overview.mdx b/client-sdks/orms/kotlin/overview.mdx similarity index 78% rename from client-sdk-references/kotlin/libraries/overview.mdx rename to client-sdks/orms/kotlin/overview.mdx index 3991e4d3..c7333572 100644 --- a/client-sdk-references/kotlin/libraries/overview.mdx +++ b/client-sdks/orms/kotlin/overview.mdx @@ -1,10 +1,10 @@ --- -title: "SQL libraries" -description: "Reference for using PowerSync with SQL mapping libraries." +title: "Kotlin SQL Libraries" +description: "Reference for using PowerSync with SQL mapping libraries on Kotlin." sidebarTitle: "Overview" --- -The PowerSync Kotlin SDK allows syncing SQLite databases with your backend database, and gives you full control over which queries you run on your client. +The PowerSync Kotlin SDK allows syncing SQLite databases with your backend source database, and gives you full control over which queries you run on your client. Manually writing SQL queries and parsing results can be prone to errors though. Libraries like [SQLDelight](https://sqldelight.github.io/sqldelight) and [Room](https://developer.android.com/jetpack/androidx/releases/room) make this process safer by validating your schema and queries at compile-time, as well as generating code to map from raw SQLite rows into statically typed structures. @@ -20,15 +20,15 @@ Starting with version `1.6.0` of the PowerSync Kotlin SDK, both SQLDelight and R - + Use SQLDelight on PowerSync databases. - + Use PowerSync with Room databases. 
-If you're not sure which library to use, consider that Room requires [raw tables](/usage/use-case-examples/raw-tables) and is more complex to set up, so: +If you're not sure which library to use, consider that Room requires [raw tables](/client-sdks/advanced/raw-tables) and is more complex to set up, so: - SQLDelight is easier to use if you're starting with an existing PowerSync database. - We mainly recommend the Room integration if you have an existing Room database you want to add sync to. \ No newline at end of file diff --git a/client-sdk-references/kotlin/libraries/room.mdx b/client-sdks/orms/kotlin/room.mdx similarity index 93% rename from client-sdk-references/kotlin/libraries/room.mdx rename to client-sdks/orms/kotlin/room.mdx index 0f6b7ed8..962809d4 100644 --- a/client-sdk-references/kotlin/libraries/room.mdx +++ b/client-sdks/orms/kotlin/room.mdx @@ -30,7 +30,7 @@ When adopting the Room integration for PowerSync: PowerSync acts as an addon to your existing Room database, which means that (unlike with most other PowerSync SDKs) you are still responsible for schema management. -Room requires [raw tables](/usage/use-case-examples/raw-tables), as the views managed by PowerSync are incompatible with +Room requires [raw tables](/client-sdks/advanced/raw-tables), as the views managed by PowerSync are incompatible with the schema verification when Room opens the database. To add PowerSync to your Room database, @@ -92,9 +92,9 @@ Here: - The SQL statements must match the schema created by Room. - The `RawTable.name` and `PendingStatementParameter.Column` values must match the table and column names of the synced - table from the PowerSync service, derived from your sync rules. + table from the PowerSync Service, derived from your sync rules. -For more details, see [raw tables](/usage/use-case-examples/raw-tables). +For more details, see [raw tables](/client-sdks/advanced/raw-tables). After these steps, you can open your Room database like you normally would. 
Then, you can use the following method to obtain a `PowerSyncDatabase` instance which is backed by Room: @@ -155,7 +155,7 @@ todoItemsDao.watchAll().collect { items -> To transfer local writes from Room to PowerSync: -1. Create triggers on your Room tables to insert rows into `ps_crud`. See [raw tables](/usage/use-case-examples/raw-tables#capture-local-writes-with-triggers) for details. +1. Create triggers on your Room tables to insert rows into `ps_crud`. See [raw tables](/client-sdks/advanced/raw-tables#capture-local-writes-with-triggers) for details. 2. Ensure the `RoomConnectionPool` is constructed with your `schema` (as shown above). When the schema is provided, the pool will notify PowerSync about writes to every raw table referenced in the schema. 3. Alternatively, after performing writes through Room, invoke: diff --git a/client-sdk-references/kotlin/libraries/sqldelight.mdx b/client-sdks/orms/kotlin/sqldelight.mdx similarity index 95% rename from client-sdk-references/kotlin/libraries/sqldelight.mdx rename to client-sdks/orms/kotlin/sqldelight.mdx index c6329cd4..1a07f0aa 100644 --- a/client-sdk-references/kotlin/libraries/sqldelight.mdx +++ b/client-sdks/orms/kotlin/sqldelight.mdx @@ -10,7 +10,7 @@ SQLDelight support is currently in beta. There are some limitations to be aware of: 1. PowerSync migrates all databases to `user_version` 1 when created (it will never downgrade a database). If you want to use SQLDelight's schema versioning, start from version `2`. -2. `CREATE TABLE` statements in `.sq` files are only used at build time to verify queries. At runtime, PowerSync creates tables as views from your schema and ignores those statements. If you want SQLDelight to manage the schema, configure PowerSync to use [raw tables](/usage/use-case-examples/raw-tables). +2. `CREATE TABLE` statements in `.sq` files are only used at build time to verify queries. At runtime, PowerSync creates tables as views from your schema and ignores those statements. 
If you want SQLDelight to manage the schema, configure PowerSync to use [raw tables](/client-sdks/advanced/raw-tables). 3. Functions and tables provided by the PowerSync core SQLite extension are not visible to `.sq` files currently. We may revisit this with a custom dialect in the future. @@ -31,7 +31,7 @@ sync). ## Installation -This guide assumes that you already have a PowerSync database for Kotlin. See the [general documentation](/client-sdk-references/kotlin) for notes on getting started with PowerSync. +This guide assumes that you already have a PowerSync database for Kotlin. See the [general documentation](/client-sdks/reference/kotlin) for notes on getting started with PowerSync. To use SQLDelight, you can generally follow [SQLDelight](https://sqldelight.github.io/sqldelight/2.1.0/multiplatform_sqlite/) documentation. A few steps are different though, and these are highlighted here. @@ -65,7 +65,7 @@ sqldelight { ## Usage -Open a PowerSync database [in the usual way](https://docs.powersync.com/client-sdk-references/kotlin#getting-started) +Open a PowerSync database [in the usual way](https://docs.powersync.com/client-sdks/reference/kotlin#getting-started) and finally pass it to the constructor of your generated SQLDelight database: ```kotlin diff --git a/client-sdks/orms/overview.mdx b/client-sdks/orms/overview.mdx new file mode 100644 index 00000000..38d00c51 --- /dev/null +++ b/client-sdks/orms/overview.mdx @@ -0,0 +1,28 @@ +--- +title: "ORM Support Overview" +sidebarTitle: "Overview" +--- + +## Our Approach to ORM Support + +As much as some developers love to drop into raw SQL for advanced queries, it can be annoying to have to write SQL for simple queries, often because there’s no type-safety. Using an ORM helps address this challenge. + +With PowerSync our philosophy is to not force a specific ORM on developers. Instead, we decided to allow any approach from raw SQL queries to working with popular ORM libraries. 
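As an illustration of the gap an ORM layer fills, here is a minimal self-contained TypeScript sketch. It does not use any PowerSync or ORM API — the `fakeDb` stand-in and `getTodos` helper are hypothetical — but it shows how even a thin typed wrapper over raw SQL gives callers compile-time checked results instead of untyped rows:

```typescript
// Hypothetical sketch: a thin typed layer over a raw-SQL `getAll`-style method.
// None of these names are PowerSync APIs; they only illustrate the idea.
interface TodoRow {
  id: string;
  description: string;
  completed: number; // SQLite stores booleans as integers
}

// Stand-in for a database object exposing a raw, untyped query method.
const fakeDb = {
  async getAll(sql: string, params: unknown[] = []): Promise<Record<string, unknown>[]> {
    // A real implementation would run the SQL against SQLite;
    // here we return a fixed row so the sketch is runnable.
    return [{ id: '1', description: 'Buy milk', completed: 0 }];
  }
};

// A tiny typed helper: callers get TodoRow[] instead of untyped records,
// so typos in column names become compile errors at the call site.
async function getTodos(listId: string): Promise<TodoRow[]> {
  const rows = await fakeDb.getAll('SELECT * FROM todos WHERE list_id = ?', [listId]);
  return rows.map((r) => ({
    id: String(r.id),
    description: String(r.description),
    completed: Number(r.completed)
  }));
}

getTodos('list-1').then((todos) => {
  console.log(todos[0].description); // type-checked property access
});
```

An ORM generalizes this pattern — generating the typed mapping layer for you rather than having you hand-write one per query.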
+ +We specifically avoid implementing our own ORM since we feel it's better to support popular existing ORMs, which likely do a much better job than we can. It also makes it easier to switch to/from PowerSync if you can keep most of your database code the same. + + +## Platform-Specific Information + + + + + + + + + + +## See Also + +For more details, refer to our blog post: [Using ORMs With PowerSync](https://www.powersync.com/blog/using-orms-with-powersync) \ No newline at end of file diff --git a/client-sdk-references/swift/grdb.mdx b/client-sdks/orms/swift/grdb.mdx similarity index 98% rename from client-sdk-references/swift/grdb.mdx rename to client-sdks/orms/swift/grdb.mdx index 01d0ad49..7be02556 100644 --- a/client-sdk-references/swift/grdb.mdx +++ b/client-sdks/orms/swift/grdb.mdx @@ -1,5 +1,6 @@ --- title: "GRDB (Alpha)" +sidebarTitle: "Swift: GRDB Library" --- PowerSync integrates with the [GRDB library](https://github.com/groue/GRDB.swift), a powerful SQLite tool for Swift development. GRDB is a full-fledged SQLite ecosystem that offers SQLite connection creation and pooling, SQL generation (ORM functionality), database observation (reactive queries), robust concurrency, migrations, and SwiftUI integration with [GRDBQuery](https://github.com/groue/GRDBQuery). @@ -29,7 +30,7 @@ When using GRDB with PowerSync: ## Setup -This guide assumes that you have completed the [Getting Started](/client-sdk-references/swift#getting-started) steps in the SDK documentation, or are at least familiar with them. The GRDB-specific configuration described below applies to the "Instantiate the PowerSync Database" step (step 2) in the Getting Started guide. +This guide assumes that you have completed the [Getting Started](/client-sdks/reference/swift#getting-started) steps in the SDK documentation, or are at least familiar with them. The GRDB-specific configuration described below applies to the "Instantiate the PowerSync Database" step (step 2) in the Getting Started guide. 
To set up PowerSync with GRDB, create a `DatabasePool` with PowerSync configuration: diff --git a/client-sdk-references/introduction.mdx b/client-sdks/overview.mdx similarity index 92% rename from client-sdk-references/introduction.mdx rename to client-sdks/overview.mdx index f39acc3f..0e51fc92 100644 --- a/client-sdk-references/introduction.mdx +++ b/client-sdks/overview.mdx @@ -1,5 +1,5 @@ --- -title: "Introduction" +title: "Overview" description: "PowerSync supports multiple client-side frameworks with official SDKs" --- diff --git a/client-sdks/reading-data.mdx b/client-sdks/reading-data.mdx new file mode 100644 index 00000000..f76c5550 --- /dev/null +++ b/client-sdks/reading-data.mdx @@ -0,0 +1,110 @@ +--- +title: "Reading Data" +description: "How to read data from the local SQLite database using SQL queries" +sidebarTitle: "Overview" +--- + +On the client-side, you can read data directly from the local SQLite database using standard SQL queries. + +## Basic Queries + +Read data using SQL queries: + + + ```typescript React Native, Web, Node.js & Capacitor (TS) + // Get all todos + const todos = await db.getAll('SELECT * FROM todos'); + + // Get a single todo + const todo = await db.get('SELECT * FROM todos WHERE id = ?', [todoId]); + + // Watch for changes (reactive query) + const stream = db.watch('SELECT * FROM todos WHERE list_id = ?', [listId]); + for await (const todos of stream) { + // Update UI when data changes + console.log(todos); + } + ``` + + ```kotlin Kotlin + // Get all todos + val todos = database.getAll("SELECT * FROM todos") { cursor -> + Todo.fromCursor(cursor) + } + + // Get a single todo + val todo = database.get("SELECT * FROM todos WHERE id = ?", listOf(todoId)) { cursor -> + Todo.fromCursor(cursor) + } + + // Watch for changes + database.watch("SELECT * FROM todos WHERE list_id = ?", listOf(listId)) + .collect { todos -> + // Update UI when data changes + } + ``` + + ```swift Swift + // Get all todos + let todos = try await db.getAll( + 
sql: "SELECT * FROM todos", + mapper: { cursor in + TodoContent( + description: try cursor.getString(name: "description")!, + completed: try cursor.getBooleanOptional(name: "completed") + ) + } + ) + + // Watch for changes + for try await todos in db.watch( + sql: "SELECT * FROM todos WHERE list_id = ?", + parameters: [listId] + ) { + // Update UI when data changes + } + ``` + + ```dart Dart/Flutter + // Get all todos + final todos = await db.getAll('SELECT * FROM todos'); + + // Get a single todo + final todo = await db.get('SELECT * FROM todos WHERE id = ?', [todoId]); + + // Watch for changes + db.watch('SELECT * FROM todos WHERE list_id = ?', [listId]) + .listen((todos) { + // Update UI when data changes + }); + ``` + + ```csharp .NET + // Use db.Get() to fetch a single row: + Console.WriteLine(await db.Get("SELECT powersync_rs_version();")); + + // Or db.GetAll() to fetch all: + // Where List result is defined: + // record ListResult(string id, string name, string owner_id, string created_at); + Console.WriteLine(await db.GetAll("SELECT * FROM lists;")); + ``` + + +## Live Queries / Watch Queries + +For reactive UI updates that automatically refresh when data changes, use watch queries. These queries execute whenever dependent tables are modified. + +See [Live Queries / Watch Queries](/client-sdks/watch-queries) for more details. + +## ORM Support + +PowerSync integrates with popular ORM libraries. Using an ORM is often preferable to writing raw SQL queries, as they provide type safety and other benefits. Many ORMs come with additional tooling that improves the experience of working with SQLite or in specific frameworks. +To learn which ORMs PowerSync supports and how to get started, see [ORMs Overview](/client-sdks/orms/overview). 
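To make the watch-query behavior described above concrete, here is a small self-contained TypeScript sketch. It is purely illustrative — `TinyWatchDb` is not a PowerSync API — but it captures the core mechanic: a registered query re-runs whenever one of its source tables is written to, emitting a fresh result set each time:

```typescript
// Illustrative only: a toy in-memory database whose watch queries re-run
// on every write to the watched table. Not a PowerSync API.
type Listener = () => void;

class TinyWatchDb {
  private tables = new Map<string, unknown[]>();
  private listeners = new Map<string, Listener[]>();

  write(table: string, row: unknown): void {
    const rows = this.tables.get(table) ?? [];
    rows.push(row);
    this.tables.set(table, rows);
    // Notify every watcher that depends on this table.
    for (const fn of this.listeners.get(table) ?? []) fn();
  }

  watch(table: string, onResults: (rows: unknown[]) => void): void {
    const run = () => onResults(this.tables.get(table) ?? []);
    const list = this.listeners.get(table) ?? [];
    list.push(run);
    this.listeners.set(table, list);
    run(); // emit the initial result set immediately
  }
}

const tinyDb = new TinyWatchDb();
const snapshots: number[] = [];
tinyDb.watch('todos', (rows) => snapshots.push(rows.length));
tinyDb.write('todos', { description: 'Buy milk' });
tinyDb.write('todos', { description: 'Walk dog' });
console.log(snapshots); // [0, 1, 2]: initial emit plus one per write
```

The real SDKs additionally track which tables a SQL query depends on, debounce rapid changes, and can diff consecutive result sets.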
+ +## Advanced Topics + +- [Usage Examples](/client-sdks/usage-examples) - Code examples for common use cases +- [Full-Text Search](/client-sdks/full-text-search) - Full-text search using the [SQLite FTS5 extension](https://www.sqlite.org/fts5.html) +- [Query JSON in SQLite](/client-sdks/advanced/query-json-in-sqlite) - Learn how to work with JSON data in SQLite +- [Infinite Scrolling](/client-sdks/infinite-scrolling) - Efficiently load large datasets +- [High Performance Diffs](/client-sdks/high-performance-diffs) - Efficiently get row changes for large datasets diff --git a/client-sdk-references/capacitor.mdx b/client-sdks/reference/capacitor.mdx similarity index 72% rename from client-sdk-references/capacitor.mdx rename to client-sdks/reference/capacitor.mdx index 353ba2dd..054c2f98 100644 --- a/client-sdk-references/capacitor.mdx +++ b/client-sdks/reference/capacitor.mdx @@ -1,7 +1,7 @@ --- title: "Capacitor (alpha)" -description: "Full SDK reference for using PowerSync in Capacitor clients" -sidebarTitle: "Overview" +description: "Full SDK guide for using PowerSync in Capacitor clients" +sidebarTitle: "Capacitor" --- import SdkFeatures from '/snippets/sdk-features.mdx'; @@ -21,7 +21,7 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; Full API reference for the SDK - + Gallery of example projects/demo apps built with Capacitor and PowerSync @@ -29,24 +29,22 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; - - This SDK is currently in an [**alpha** release](/resources/feature-status). - The SDK is largely built on our stable [Web SDK](/client-sdk-references/javascript-web), so that functionality can be considered stable. However, the [Capacitor Community SQLite](https://github.com/capacitor-community/sqlite) integration for mobile platforms is in alpha for real-world testing and feedback. There are [known limitations](#limitations) currently. 
+ The SDK is largely built on our stable [Web SDK](/client-sdks/reference/javascript-web), so that functionality can be considered stable. However, the [Capacitor Community SQLite](https://github.com/capacitor-community/sqlite) integration for mobile platforms is in alpha for real-world testing and feedback. There are [known limitations](#limitations) currently. - + **Built on the Web SDK** - The PowerSync Capacitor SDK is built on top of the [PowerSync Web SDK](/client-sdk-references/javascript-web). It shares the same API and usage patterns as the Web SDK. The main differences are: + The PowerSync Capacitor SDK is built on top of the [PowerSync Web SDK](/client-sdks/reference/javascript-web). It shares the same API and usage patterns as the Web SDK. The main differences are: - Uses Capacitor-specific SQLite implementation (`@capacitor-community/sqlite`) for native Android and iOS platforms - Certain features are not supported on native Android and iOS platforms, see [limitations](#limitations) below for details - All code examples from the Web SDK apply to Capacitor — use `@powersync/web` for imports instead of `@powersync/capacitor`. See the [JavaScript Web SDK reference](/client-sdk-references/javascript-web) for ORM support, SPA framework integration, and developer notes. - + All code examples from the Web SDK apply to Capacitor — use `@powersync/web` for imports instead of `@powersync/capacitor`. See the [JavaScript Web SDK reference](/client-sdks/reference/javascript-web) for ORM support, SPA framework integration, and developer notes. + ### SDK Features @@ -58,26 +56,22 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; ## Getting Started -Before implementing the PowerSync SDK in your project, make sure you have completed these steps: - -- Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started). 
-- [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance. -- [Installed](/client-sdk-references/capacitor#installation) the PowerSync Capacitor SDK. +**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)). -### 1. Define the Schema +### 1. Define the Client-Side Schema -The first step is defining the schema for the local SQLite database. +import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx'; -This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the local PowerSync database is constructed (as we'll show in the next step). + -The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/usage/sync-rules). If a value doesn't match, it is cast automatically. For details on how Postgres types are mapped to the types below, see the section on [Types](/usage/sync-rules/types) in the _Sync Rules_ documentation. +The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types). **Example**: - **Note on imports**: While you install `@powersync/capacitor`, the Capacitor SDK extends the Web SDK so you import general components from `@powersync/web` (installed as a peer dependency). See the [JavaScript Web SDK schema definition section](/client-sdk-references/javascript-web#1-define-the-schema) for more advanced examples. 
+ **Note on imports**: While you install `@powersync/capacitor`, the Capacitor SDK extends the Web SDK so you import general components from `@powersync/web` (installed as a peer dependency). See the [JavaScript Web SDK schema definition section](/client-sdks/reference/javascript-web#1-define-the-client-side-schema) for more advanced examples. ```js @@ -116,15 +110,13 @@ export type TodoRecord = Database['todos']; export type ListRecord = Database['lists']; ``` - + **Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this. - + ### 2. Instantiate the PowerSync Database -Next, you need to instantiate the PowerSync database — this is the core managed database. - -Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected. +Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline. **Example**: @@ -171,7 +163,9 @@ const db = new PowerSyncDatabase({ }); ``` -Once you've instantiated your PowerSync database, you will need to call the [connect()](https://powersync-ja.github.io/powersync-js/web-sdk/classes/AbstractPowerSyncDatabase#connect) method to activate it. +Once you've instantiated your PowerSync database, call the [connect()](https://powersync-ja.github.io/powersync-js/web-sdk/classes/AbstractPowerSyncDatabase#connect) method to sync data with your backend. + + ```js export const setupPowerSync = async () => { @@ -183,23 +177,20 @@ export const setupPowerSync = async () => { ### 3. 
Integrate with your Backend -The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. - -It is used to: +The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to: 1. Retrieve an auth token to connect to the PowerSync instance. -2. Apply local changes on your backend application server (and from there, to your backend database) +2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected. Accordingly, the connector must implement two methods: -1. [PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This is called every couple of minutes and is used to obtain credentials for your app backend API. -\> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated. -2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - Use this to upload client-side changes to your app backend. - -\> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation. +1. 
[PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated. +2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation. **Example**: - See the [JavaScript Web SDK backend integration section](/client-sdk-references/javascript-web#3-integrate-with-your-backend) for connector examples with Supabase and Firebase authentication, and handling `uploadData` with batch operations. + See the [JavaScript Web SDK backend integration section](/client-sdks/reference/javascript-web#3-integrate-with-your-backend) for connector examples with Supabase and Firebase authentication, and handling `uploadData` with batch operations. ```js @@ -207,14 +198,11 @@ import { UpdateType } from '@powersync/web'; export class Connector { async fetchCredentials() { - // Implement fetchCredentials to obtain a JWT from your authentication service. 

- // See https://docs.powersync.com/installation/authentication-setup - // If you're using Supabase or Firebase, you can re-use the JWT from those clients, see - // - https://docs.powersync.com/installation/authentication-setup/supabase-auth - // - https://docs.powersync.com/installation/authentication-setup/firebase-auth + // Implement fetchCredentials to obtain a JWT from your authentication service. + // See https://docs.powersync.com/configuration/auth/overview return { endpoint: '[Your PowerSync instance URL or self-hosted endpoint]', - // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens) to get up and running quickly + // Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly token: 'An authentication token' }; } @@ -223,7 +211,7 @@ export class Connector { // Implement uploadData to send local changes to your backend service. // You can omit this method if you only want to sync data from the database to the client - // See example implementation here: https://docs.powersync.com/client-sdk-references/javascript-web#3-integrate-with-your-backend + // See example implementation here: https://docs.powersync.com/client-sdks/reference/javascript-web#3-integrate-with-your-backend } } ``` @@ -233,15 +221,15 @@ export class Connector { Once the PowerSync instance is configured you can start using the SQLite DB functions. - **All CRUD examples from the JavaScript Web SDK apply**: The Capacitor SDK uses the same API as the Web SDK. See the [JavaScript Web SDK CRUD functions section](/client-sdk-references/javascript-web#using-powersync-crud-functions) for examples of `get`, `getAll`, `watch`, `execute`, `writeTransaction`, incremental watch updates, and differential results. + **All CRUD examples from the JavaScript Web SDK apply**: The Capacitor SDK uses the same API as the Web SDK. 
See the [JavaScript Web SDK CRUD functions section](/client-sdks/reference/javascript-web#using-powersync-crud-functions) for examples of `get`, `getAll`, `watch`, `execute`, `writeTransaction`, incremental watch updates, and differential results. The most commonly used CRUD functions to interact with your SQLite data are: -- [PowerSyncDatabase.get](/client-sdk-references/javascript-web#fetching-a-single-item) - get (SELECT) a single row from a table. -- [PowerSyncDatabase.getAll](/client-sdk-references/javascript-web#querying-items-powersync.getall) - get (SELECT) a set of rows from a table. -- [PowerSyncDatabase.watch](/client-sdk-references/javascript-web#watching-queries-powersync.watch) - execute a read query every time source tables are modified. -- [PowerSyncDatabase.execute](/client-sdk-references/javascript-web#mutations-powersync.execute) - execute a write (INSERT/UPDATE/DELETE) query. +- [PowerSyncDatabase.get](/client-sdks/reference/javascript-web#fetching-a-single-item) - get (SELECT) a single row from a table. +- [PowerSyncDatabase.getAll](/client-sdks/reference/javascript-web#querying-items-powersync.getall) - get (SELECT) a set of rows from a table. +- [PowerSyncDatabase.watch](/client-sdks/reference/javascript-web#watching-queries-powersync.watch) - execute a read query every time source tables are modified. +- [PowerSyncDatabase.execute](/client-sdks/reference/javascript-web#mutations-powersync.execute) - execute a write (INSERT/UPDATE/DELETE) query. ### Fetching a Single Item @@ -280,7 +268,7 @@ The [watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyn -For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/usage/use-case-examples/watch-queries). +For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries). 
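As a rough illustration of what differential watch results provide, the following self-contained TypeScript sketch compares two consecutive result sets by row `id`. The `diffResults` helper is hypothetical — it is not an SDK API — but the added/removed split it computes is the kind of delta that differential queries expose so the UI can update only the rows that changed:

```typescript
// Hypothetical sketch of "differential" results: given the previous and the
// current result set of a watched query, report which rows were added or
// removed (keyed by id). Not a PowerSync API.
interface Row {
  id: string;
  [key: string]: unknown;
}

function diffResults(previous: Row[], current: Row[]) {
  const prevIds = new Set(previous.map((r) => r.id));
  const currIds = new Set(current.map((r) => r.id));
  return {
    added: current.filter((r) => !prevIds.has(r.id)),
    removed: previous.filter((r) => !currIds.has(r.id))
  };
}

const before: Row[] = [{ id: 'a' }, { id: 'b' }];
const after: Row[] = [{ id: 'b' }, { id: 'c' }];
const delta = diffResults(before, after);
console.log(delta.added.map((r) => r.id));   // ['c']
console.log(delta.removed.map((r) => r.id)); // ['a']
```

Applying a small delta like this is far cheaper than re-rendering an entire list on every change, which is why differential results matter for large result sets.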
### Mutations (PowerSync.execute, PowerSync.writeTransaction) @@ -332,16 +320,38 @@ logger.setLevel(LogLevel.DEBUG); ## Additional Resources -See the [JavaScript Web SDK reference](/client-sdk-references/javascript-web) for: +See the [JavaScript Web SDK reference](/client-sdks/reference/javascript-web) for: + +- [ORM Support](/client-sdks/orms/js/overview) +- [SPA Framework Integration](/client-sdks/reference/javascript-web#single-page-application-spa-frameworks) +- [Usage Examples](/client-sdks/usage-examples) +- [Developer Notes](/client-sdks/reference/javascript-web#developer-notes) + +## Upgrading the SDK -- [ORM Support](/client-sdk-references/javascript-web/javascript-orm/overview) -- [SPA Framework Integration](/client-sdk-references/javascript-web/javascript-spa-frameworks) -- [Usage Examples](/client-sdk-references/javascript-web/usage-examples) -- [Developer Notes](/client-sdk-references/javascript-web#developer-notes) +Run the below command in your project folder: + + + +```bash +npm upgrade @powersync/capacitor @powersync/web +``` + + +```bash +yarn upgrade @powersync/capacitor @powersync/web +``` + + +```bash +pnpm upgrade @powersync/capacitor @powersync/web +``` + + ## Troubleshooting -See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. +See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues. ## Supported Platforms diff --git a/client-sdk-references/dotnet.mdx b/client-sdks/reference/dotnet.mdx similarity index 81% rename from client-sdk-references/dotnet.mdx rename to client-sdks/reference/dotnet.mdx index 15510d05..61de10fc 100644 --- a/client-sdk-references/dotnet.mdx +++ b/client-sdks/reference/dotnet.mdx @@ -1,7 +1,7 @@ --- title: ".NET (alpha)" description: "SDK reference for using PowerSync in .NET clients." 
-sidebarTitle: Overview +sidebarTitle: ".NET" --- import DotNetInstallation from '/snippets/dotnet/installation.mdx'; @@ -22,7 +22,7 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; A full API Reference for this SDK is not yet available. This is planned for a future release. - + Gallery of example projects/demo apps built with .NET PowerSync @@ -31,8 +31,6 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; - - This SDK is currently in an [**alpha** release](/resources/feature-status). It is not suitable for production use as breaking changes may still occur. @@ -62,26 +60,23 @@ For more details, please refer to the package [README](https://github.com/powers -Next, make sure that you have: - -* Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started). -* [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance. +**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)). -### 1. Define the schema +### 1. Define the Client-Side Schema -The first step is defining the schema for the local SQLite database. +import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx'; -This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the local PowerSync database is constructed (as we'll show in the next step). + You can use [this example](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/AppSchema.cs) as a reference when defining your schema. -### 2. Instantiate the PowerSync Database +The types available are `text`, `integer` and `real`. 
These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types). -Next, you need to instantiate the PowerSync database — this is the core managed database. +### 2. Instantiate the PowerSync Database -Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected. +Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline. **Example**: @@ -138,18 +133,15 @@ The initialization syntax differs slightly between the Common and MAUI SDKs: ### 3. Integrate with your Backend -The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. - -It is used to: +The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to: 1. Retrieve an auth token to connect to the PowerSync instance. -2. Apply local changes on your backend application server (and from there, to your backend database) +2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected. Accordingly, the connector must implement two methods: -1. 
[PowerSyncBackendConnector.FetchCredentials](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/NodeConnector.cs#L50) - This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated. -2. [PowerSyncBackendConnector.UploadData](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/NodeConnector.cs#L72) - Use this to upload client-side changes to your app backend. - -> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation. +1. [PowerSyncBackendConnector.FetchCredentials](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/NodeConnector.cs#L50) - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated. +2. [PowerSyncBackendConnector.UploadData](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/NodeConnector.cs#L72) - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation. **Example**: @@ -191,13 +183,10 @@ public class MyConnector : IPowerSyncBackendConnector public async Task FetchCredentials() { try { - // Obtain a JWT from your authentication service. 
- // See https://docs.powersync.com/installation/authentication-setup - // If you're using Supabase or Firebase, you can re-use the JWT from those clients, see - // - https://docs.powersync.com/installation/authentication-setup/supabase-auth - // - https://docs.powersync.com/installation/authentication-setup/firebase-auth + // Implement fetchCredentials to obtain a JWT from your authentication service. + // See https://docs.powersync.com/configuration/auth/overview - var authToken = "your-auth-token"; // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens) to get up and running quickly + var authToken = "your-auth-token"; // Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly // Return credentials with PowerSync endpoint and JWT token return new PowerSyncCredentials(_powerSyncUrl, authToken); @@ -267,13 +256,15 @@ public class MyConnector : IPowerSyncBackendConnector } ``` -With your database instantiated and your connector ready, call `connect` to start the synchronization process: +With your database instantiated and your connector ready, call `connect` to start syncing data with your backend: ```cs await db.Connect(new MyConnector()); await db.WaitForFirstSync(); // Optional, to wait for a complete snapshot of data to be available ``` + + ## Usage After connecting the client database, it is ready to be used. 
You can run queries and make updates as follows: @@ -464,7 +455,7 @@ class SyncProgressBar ## Monitor sync status by priority -When using [bucket priorities](/usage/use-case-examples/prioritized-sync), you can access priority-specific sync status information using [PriorityStatusEntries](https://github.com/powersync-ja/powersync-dotnet/blob/main/PowerSync/PowerSync.Common/DB/Crud/SyncStatus.cs) and [StatusForPriority](https://github.com/powersync-ja/powersync-dotnet/blob/main/PowerSync/PowerSync.Common/DB/Crud/SyncStatus.cs): +When using [bucket priorities](/sync/advanced/prioritized-sync), you can access priority-specific sync status information using [PriorityStatusEntries](https://github.com/powersync-ja/powersync-dotnet/blob/main/PowerSync/PowerSync.Common/DB/Crud/SyncStatus.cs) and [StatusForPriority](https://github.com/powersync-ja/powersync-dotnet/blob/main/PowerSync/PowerSync.Common/DB/Crud/SyncStatus.cs): **Version compatibility**: `PriorityStatusEntries` and `StatusForPriority()` are available since version 0.0.6-alpha.1 of the SDK. @@ -518,6 +509,29 @@ var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions }); ``` +## Upgrading the SDK + +To upgrade to the latest version of the PowerSync package, run the below command in your project folder: + + + + ```bash + dotnet add package PowerSync.Common --prerelease + ``` + + + + ```bash + dotnet add package PowerSync.Common --prerelease + dotnet add package PowerSync.Maui --prerelease + ``` + + + + + Add `--prerelease` while this package is in alpha. To install a specific version, use `--version` instead: `dotnet add package PowerSync.Common --version 0.0.6-alpha.1` + + ## Supported Platforms See [Supported Platforms -> .NET SDK](/resources/supported-platforms#net-sdk). 
diff --git a/client-sdk-references/flutter.mdx b/client-sdks/reference/flutter.mdx similarity index 71% rename from client-sdk-references/flutter.mdx rename to client-sdks/reference/flutter.mdx index f238af3d..c011664e 100644 --- a/client-sdk-references/flutter.mdx +++ b/client-sdks/reference/flutter.mdx @@ -1,7 +1,7 @@ --- title: "Dart/Flutter" -description: "Full SDK reference for using PowerSync in Dart/Flutter clients" -sidebarTitle: Overview +description: "Full SDK guide for using PowerSync in Dart/Flutter clients" +sidebarTitle: "Dart/Flutter" --- import SdkFeatures from '/snippets/sdk-features.mdx'; @@ -20,7 +20,7 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; Full API reference for the SDK - + Gallery of example projects/demo apps built with Flutter and PowerSync @@ -28,8 +28,6 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; - - ### Quickstart Get started quickly by using the self-hosted **Flutter** + **Supabase** template @@ -41,9 +39,9 @@ Get started quickly by using the self-hosted **Flutter** + **Supabase** template - - Web support is currently in a beta release. Refer to [Flutter Web Support](/client-sdk-references/flutter/flutter-web-support) for more details. - + + Web support is currently in a beta release. Refer to [Flutter Web Support](/client-sdks/frameworks/flutter-web-support) for more details. + ## Installation @@ -51,11 +49,7 @@ Get started quickly by using the self-hosted **Flutter** + **Supabase** template ## Getting Started -Before implementing the PowerSync SDK in your project, make sure you have completed these steps: - -* Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started). -* [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance. -* [Installed](/client-sdk-references/flutter#installation) the PowerSync Dart/Flutter SDK. 
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).

For this reference document, we assume that you have created a Flutter project and have the following directory structure:

@@ -75,14 +69,13 @@ lib/

```
-### 1\. Define the Schema
+### 1\. Define the Client-Side Schema

-The first step is defining the schema for the local SQLite database. This will be provided as a `schema` parameter to the [PowerSyncDatabase](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/PowerSyncDatabase.html) constructor.
-
-This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the PowerSync database is constructed.
+The first step is to define the client-side schema: the schema for the managed SQLite database exposed by the PowerSync Client SDK, which your app can read from and write to. The client-side schema is typically derived mainly from your backend source database schema and [Sync Rules](/sync/rules/overview), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).

-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/usage/sync-rules). If a value doesn't match, it is cast automatically.

For details on how Postgres types are mapped to the types below, see the section on [Types](/usage/sync-rules/types) in the _Sync Rules_ documentation. + +The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types). **Example**: @@ -110,15 +103,13 @@ const schema = Schema(([ ])); ``` - + **Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this. - + ### 2\. Instantiate the PowerSync Database -Next, you need to instantiate the PowerSync database — this is the core managed client-side database. - -Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected. +Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline. To instantiate `PowerSyncDatabase`, inject the Schema you defined in the previous step and a file path — it's important to only instantiate one instance of `PowerSyncDatabase` per file. @@ -142,7 +133,9 @@ openDatabase() async { } ``` -Once you've instantiated your PowerSync database, you will need to call the [connect()](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/connect.html) method to activate it. This method requires the backend connector that will be created in the next step. 
+Once you've instantiated your PowerSync database, call the [connect()](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/connect.html) method to sync data with your backend. This method requires the backend connector that will be created in the next step. + + ```dart lib/main.dart {35} import 'package:flutter/material.dart'; @@ -191,18 +184,15 @@ class _DemoAppState extends State { ### 3\. Integrate with your Backend -The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. +The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to: -It is used to: - -1. [Retrieve an auth token](/installation/authentication-setup) to connect to the PowerSync instance. -2. [Apply local changes](/installation/app-backend-setup/writing-client-changes) on your backend application server (and from there, to your backend database) +1. Retrieve an auth token to connect to the PowerSync instance. +2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected. Accordingly, the connector must implement two methods: -1. [PowerSyncBackendConnector.fetchCredentials](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/fetchCredentials.html) \- This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated. -2. 
[PowerSyncBackendConnector.uploadData](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/uploadData.html) \- Use this to upload client-side changes to your app backend.
-\-> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation.
+1. [PowerSyncBackendConnector.fetchCredentials](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/fetchCredentials.html) \- This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
+2. [PowerSyncBackendConnector.uploadData](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/uploadData.html) \- This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.

**Example**:

@@ -215,23 +205,20 @@ class MyBackendConnector extends PowerSyncBackendConnector {
   MyBackendConnector(this.db);
   @override
   Future fetchCredentials() async {
-    // Implement fetchCredentials to obtain a JWT from your authentication service
-    // If you're using Supabase or Firebase, you can re-use the JWT from those clients, see
-    // - https://docs.powersync.com/installation/authentication-setup/supabase-auth
-    // - https://docs.powersync.com/installation/authentication-setup/firebase-auth
-
+    // Implement fetchCredentials to obtain a JWT from your authentication service.
+ // See https://docs.powersync.com/configuration/auth/overview // See example implementation here: https://pub.dev/documentation/powersync/latest/powersync/DevConnector/fetchCredentials.html return PowerSyncCredentials( endpoint: 'https://xxxxxx.powersync.journeyapps.com', - // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens) to get up and running quickly + // Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly token: 'An authentication token' ); } // Implement uploadData to send local changes to your backend service // You can omit this method if you only want to sync data from the server to the client - // See example implementation here: https://docs.powersync.com/client-sdk-references/flutter#3-integrate-with-your-backend + // See example implementation here: https://docs.powersync.com/client-sdks/reference/flutter#3-integrate-with-your-backend @override Future uploadData(PowerSyncDatabase database) async { // This function is called whenever there is data to upload, whether the @@ -268,10 +255,10 @@ Once the PowerSync instance is configured you can start using the SQLite DB func The most commonly used CRUD functions to interact with your SQLite data are: -* [PowerSyncDatabase.get](/client-sdk-references/flutter#fetching-a-single-item) \- get (SELECT) a single row from a table. -* [PowerSyncDatabase.getAll](/client-sdk-references/flutter#querying-items-powersync.getall) \- get (SELECT) a set of rows from a table. -* [PowerSyncDatabase.watch](/client-sdk-references/flutter#watching-queries-powersync.watch) \- execute a read query every time source tables are modified. -* [PowerSyncDatabase.execute](/client-sdk-references/flutter#mutations-powersync.execute) \- execute a write (INSERT/UPDATE/DELETE) query. 
+* [PowerSyncDatabase.get](/client-sdks/reference/flutter#fetching-a-single-item) \- get (SELECT) a single row from a table. +* [PowerSyncDatabase.getAll](/client-sdks/reference/flutter#querying-items-powersync.getall) \- get (SELECT) a set of rows from a table. +* [PowerSyncDatabase.watch](/client-sdks/reference/flutter#watching-queries-powersync.watch) \- execute a read query every time source tables are modified. +* [PowerSyncDatabase.execute](/client-sdks/reference/flutter#mutations-powersync.execute) \- execute a write (INSERT/UPDATE/DELETE) query. For the following examples, we will define a `TodoList` model class that represents a List of todos. @@ -371,15 +358,23 @@ Since version 1.1.2 of the SDK, logging is enabled by default and outputs logs f ## Additional Usage Examples -See [Usage Examples](/client-sdk-references/flutter/usage-examples) for further examples of the SDK. +See [Usage Examples](/client-sdks/usage-examples) for further examples of the SDK. + +## Upgrading the SDK + +To upgrade to a newer version of the PowerSync package, run the below command in your project folder: + +```bash +flutter pub upgrade powersync +``` ## ORM Support -See [Flutter ORM Support](/client-sdk-references/flutter/flutter-orm-support) for details. +See [Flutter ORM Support](/client-sdks/orms/flutter-orm-support) for details. ## Troubleshooting -See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. +See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues. 
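The four CRUD methods listed in the hunks above (`get`, `getAll`, `watch`, `execute`) can be seen together in a short sketch. This is illustrative only: it assumes a `lists` table as in the schema examples and an already-instantiated `db`.

```dart
// Illustrative only; assumes a `lists` table in the client-side schema.
Future<void> example(PowerSyncDatabase db) async {
  // execute: perform a write (INSERT/UPDATE/DELETE); the change is applied
  // locally and queued for upload to your backend automatically.
  await db.execute(
    'INSERT INTO lists (id, created_at, name) VALUES (uuid(), datetime(), ?)',
    ['New list'],
  );

  // getAll: read a set of rows once.
  final rows = await db.getAll('SELECT * FROM lists');
  print('Found ${rows.length} lists');

  // watch: re-runs the query every time the underlying tables change.
  db.watch('SELECT * FROM lists ORDER BY created_at').listen((results) {
    // Rebuild the UI with the latest results here.
  });
}
```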
## Supported Platforms diff --git a/client-sdk-references/javascript-web.mdx b/client-sdks/reference/javascript-web.mdx similarity index 53% rename from client-sdk-references/javascript-web.mdx rename to client-sdks/reference/javascript-web.mdx index 5547fe64..fd99fa25 100644 --- a/client-sdk-references/javascript-web.mdx +++ b/client-sdks/reference/javascript-web.mdx @@ -1,7 +1,7 @@ --- title: "JavaScript Web" -description: "Full SDK reference for using PowerSync in JavaScript Web clients" -sidebarTitle: "Overview" +description: "Full SDK guide for using PowerSync in JavaScript Web clients" +sidebarTitle: "JavaScript Web" --- import SdkFeatures from '/snippets/sdk-features.mdx'; @@ -21,7 +21,7 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; Full API reference for the SDK - + Gallery of example projects/demo apps built with JavaScript Web stacks and PowerSync @@ -29,8 +29,6 @@ import LocalOnly from '/snippets/local-only-escape.mdx'; - - ### Quickstart