`mkdocs/docs/api.md` (+53 −8)
@@ -1004,6 +1004,34 @@ To show only data files or delete files in the current snapshot, use `table.insp
Expert Iceberg users may choose to commit existing parquet files to the Iceberg table as data files, without rewriting them.

<!-- prettier-ignore-start -->

!!! note "Name Mapping"
    Because `add_files` uses existing files without writing new parquet files that are aware of the Iceberg table's schema, it requires the Iceberg table to have a [Name Mapping](https://iceberg.apache.org/spec/?h=name+mapping#name-mapping-serialization) (the Name Mapping maps the field names within the parquet files to the Iceberg field IDs). Hence, `add_files` requires that there are no field IDs in the parquet files' metadata, and creates a new Name Mapping based on the table's current schema if the table doesn't already have one.
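For reference, a Name Mapping is serialized as JSON in the table property `schema.name-mapping.default`; a minimal example (the field names here are illustrative) looks like:

```json
[
    { "field-id": 1, "names": ["id"] },
    { "field-id": 2, "names": ["data"] }
]
```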
!!! note "Partitions"
    `add_files` only requires the client to read the existing parquet files' metadata footer to infer the partition value of each file. This implementation also supports adding files to Iceberg tables with partition transforms like `MonthTransform` and `TruncateTransform`, which preserve the order of the values after the transformation (any transform whose `preserves_order` property is `True` is supported). Please note that if the column statistics of the `PartitionField`'s source column are not present in the parquet metadata, the partition value is inferred as `None`.
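As a hedged illustration (this is a sketch, not PyIceberg's actual code), the inference described above can be summarized: when a file's footer carries lower/upper bounds for the partition source column and an order-preserving transform maps both bounds to the same value, that value is the file's partition value; missing statistics yield `None`:

```python
import datetime

# Illustrative sketch of partition-value inference from parquet footer
# statistics; the names and shapes here are assumptions, not PyIceberg's API.
def infer_partition_value(column_stats, transform):
    if column_stats is None:
        # No statistics for the source column -> partition value is None
        return None
    lower, upper = column_stats
    t_lower, t_upper = transform(lower), transform(upper)
    if t_lower != t_upper:
        # The file spans multiple partition values and cannot be added as-is
        raise ValueError("file contains rows from more than one partition")
    return t_lower

# A month-like, order-preserving transform
month = lambda d: d.month

stats = (datetime.date(2023, 4, 1), datetime.date(2023, 4, 30))
print(infer_partition_value(stats, month))  # 4
print(infer_partition_value(None, month))   # None
```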
!!! warning "Maintenance Operations"
    Because `add_files` commits the existing parquet files to the Iceberg table as any other data file, destructive maintenance operations like expiring snapshots will remove them.
!!! warning "Check Duplicate Files"
    The `check_duplicate_files` parameter determines whether the method validates that the specified `file_paths` do not already exist in the Iceberg table. When set to `True` (the default), the method validates the paths against the table's current data files to prevent accidental duplication, ensuring the same file is not added multiple times. While this check is important for data integrity, it can introduce performance overhead for tables with a large number of files. Setting `check_duplicate_files=False` can improve performance but increases the risk of duplicate files, which may lead to data inconsistencies or table corruption. It is strongly recommended to keep this parameter enabled unless duplicate file handling is strictly enforced elsewhere.

<!-- prettier-ignore-end -->

```python
# A new snapshot is committed to the table with manifests pointing to the existing parquet files
```
Add files to the Iceberg table with custom snapshot properties:
```python
# Assume an existing Iceberg table object `tbl`

file_paths = [
    "s3a://warehouse/default/existing-1.parquet",
    "s3a://warehouse/default/existing-2.parquet",
]

# Custom snapshot properties
snapshot_properties = {"abc": "def"}

# Enable duplicate file checking
check_duplicate_files = True

# Add the Parquet files to the Iceberg table without rewriting them
tbl.add_files(
    file_paths=file_paths,
    snapshot_properties=snapshot_properties,
    check_duplicate_files=check_duplicate_files,
)

# NameMapping must have been set to enable reads
assert tbl.name_mapping() is not None

# Verify that the snapshot property was set correctly
```
`mkdocs/docs/configuration.md` (+21 −2)
@@ -127,6 +127,7 @@ For the FileIO there are several configuration options available:
| s3.request-timeout | 60.0 | Configure socket read timeouts on Windows and macOS, in seconds. |
| s3.force-virtual-addressing | False | Whether to use virtual addressing of buckets. If true, then virtual addressing is always enabled. If false, then virtual addressing is only enabled if endpoint_override is empty. This can be used for non-AWS backends that only support virtual hosted-style access. |
| s3.retry-strategy-impl | None | Ability to set a custom S3 retry strategy. A full path to a class needs to be given that extends the [S3RetryStrategy](https://github.com/apache/arrow/blob/639201bfa412db26ce45e73851432018af6c945e/python/pyarrow/_s3fs.pyx#L110) base class. |
| s3.anonymous | True | Configure whether to use an anonymous connection. If False (default), uses the key/secret if configured, or boto's credential resolver. |

<!-- markdown-link-check-enable-->
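The new `s3.anonymous` option above could be set in a `.pyiceberg.yaml` catalog block like the following (the catalog name and `uri` are placeholders):

```yaml
catalog:
  default:
    uri: https://rest-catalog.example.com
    s3.anonymous: true  # skip credential lookup and connect anonymously
```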
@@ -161,6 +162,7 @@ For the FileIO there are several configuration options available:
| adls.dfs-storage-authority | .dfs.core.windows.net | The hostname[:port] of the Data Lake Gen 2 Service. Defaults to `.dfs.core.windows.net`. Useful for connecting to a local emulator, like [azurite](https://github.com/azure/azurite). See [AzureFileSystem](https://arrow.apache.org/docs/python/filesystems.html#azure-storage-file-system) for reference |
| adls.blob-storage-scheme | https | Either `http` or `https`. Defaults to `https`. Useful for connecting to a local emulator, like [azurite](https://github.com/azure/azurite). See [AzureFileSystem](https://arrow.apache.org/docs/python/filesystems.html#azure-storage-file-system) for reference |
| adls.dfs-storage-scheme | https | Either `http` or `https`. Defaults to `https`. Useful for connecting to a local emulator, like [azurite](https://github.com/azure/azurite). See [AzureFileSystem](https://arrow.apache.org/docs/python/filesystems.html#azure-storage-file-system) for reference |
| adls.token | eyJ0eXAiOiJKV1QiLCJhbGci... | Static access token for authenticating with ADLS. Used for OAuth2 flows. |

| s3.secret-access-key | password | Configure the static secret access key used to access the FileIO. |
| s3.session-token | AQoDYXdzEJr... | Configure the static session token used to access the FileIO. |
| s3.force-virtual-addressing | True | Whether to use virtual addressing of buckets. This is set to `True` by default as OSS can only be accessed with virtual hosted style address. |
| s3.anonymous | True | Configure whether to use an anonymous connection. If False (default), uses key/secret if configured or standard AWS configuration methods. |

<!-- markdown-link-check-enable-->
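Similarly, the new `adls.token` property could appear in a catalog block like this (the catalog block, account name, and token value are illustrative placeholders):

```yaml
catalog:
  default:
    uri: https://rest-catalog.example.com
    adls.account-name: mystorageaccount
    adls.token: eyJ0eXAiOiJKV1QiLCJhbGci...
```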
@@ -388,6 +391,7 @@ The RESTCatalog supports pluggable authentication via the `auth` configuration b
- `noop`: No authentication (no Authorization header sent).

| `auth.type` | Yes | The authentication type to use (`noop`, `basic`, `oauth2`, or `custom`). |
| `auth.impl` | Conditionally | The fully qualified class path for a custom AuthManager. Required if `auth.type` is `custom`. |
| `auth.basic` | If type is `basic` | Block containing `username` and `password` for HTTP Basic authentication. |
| `auth.oauth2` | If type is `oauth2` | Block containing OAuth2 configuration (see below). |
| `auth.custom` | If type is `custom` | Block containing configuration for the custom AuthManager. |
| `auth.google` | If type is `google` | Block containing `credentials_path` to a service account file (if using). Will default to using Application Default Credentials. |
@@ -436,6 +441,20 @@ auth:
    password: mypass
```
OAuth2 Authentication:

```yaml
auth:
  type: oauth2
  oauth2:
    client_id: my-client-id
    client_secret: my-client-secret
    token_url: https://auth.example.com/oauth/token
    scope: read
    refresh_margin: 60 # (optional) seconds before expiry to refresh
    expires_in: 3600 # (optional) fallback if server does not provide
```
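A hedged sketch (not PyIceberg's actual implementation) of how the two optional fields interact: the client refreshes the token `refresh_margin` seconds before it expires, and the configured `expires_in` is only used as a fallback when the token endpoint's response omits a lifetime.

```python
# Sketch of refresh scheduling under the assumptions stated above;
# function and parameter names are illustrative, not PyIceberg's API.
def next_refresh_time(issued_at, server_expires_in,
                      fallback_expires_in=3600, refresh_margin=60):
    # Prefer the lifetime reported by the server; otherwise use the fallback
    expires_in = server_expires_in if server_expires_in is not None else fallback_expires_in
    # Refresh refresh_margin seconds before the token actually expires
    return issued_at + expires_in - refresh_margin

# Server reported a 1-hour token; refresh 60 seconds early
print(next_refresh_time(0, 3600))  # 3540
# Server omitted expires_in; the configured fallback applies
print(next_refresh_time(0, None, fallback_expires_in=1800))  # 1740
```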
Custom Authentication:

```yaml
@@ -451,7 +470,7 @@ auth:
- If `auth.type` is `custom`, you **must** specify `auth.impl` with the full class path to your custom AuthManager.
- If `auth.type` is not `custom`, specifying `auth.impl` is not allowed.
- The configuration block under each type (e.g., `basic`, `oauth2`, `custom`) is passed as keyword arguments to the corresponding AuthManager.